GET /api/guest-articles/?format=api
HTTP 200 OK
Allow: GET, POST, DELETE, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

[
    {
        "title": "Vietnam Tour Package in California – Customized Luxury Travel Experience",
        "description": "A Vietnam tour package in California also focuses on providing authentic cultural experiences. Travelers can immerse themselves in Vietnam’s rich traditions by exploring local markets, trying traditional Vietnamese dishes, and participating in cultural activities.",
        "content": "<p>Traveling to Vietnam has become increasingly popular among global travelers who seek a blend of natural beauty, cultural heritage, and modern experiences. A <strong><a href=\"https://midasiaroutes.com/trip/vietnam-tour-package-in-california\">Vietnam tour package in California</a></strong> is the perfect option for those who want a well-planned and hassle-free journey to this stunning Southeast Asian destination. With professionally designed itineraries and premium services, these travel packages offer everything needed for a memorable vacation.</p>\n<p>A well-organized Vietnam tour package in California includes carefully curated travel plans that cover the most iconic destinations in Vietnam. Travelers can explore breathtaking locations such as Ha Long Bay, Hanoi, Ho Chi Minh City, and Hoi An. Each destination offers a unique experience, from serene landscapes and historical landmarks to vibrant markets and delicious local cuisine. This ensures that travelers get a complete and enriching travel experience.</p>\n<p>One of the biggest advantages of choosing a Vietnam tour package in California is the level of convenience it provides. Instead of planning every detail separately, travelers can rely on experts to handle accommodations, transportation, sightseeing, and more. This saves time and effort while ensuring a smooth travel experience from start to finish.</p>\n<p>Customization is another key benefit of a Vietnam tour package in California. Travelers can personalize their itinerary according to their preferences, whether they are looking for a luxury vacation, a cultural exploration, or an adventure-filled trip. From selecting destinations to choosing activities, everything can be tailored to meet individual needs. This flexibility makes the travel experience more enjoyable and satisfying.</p>\n<p>Luxury and comfort are also major highlights of a Vietnam tour package in California. 
Travelers can stay in premium hotels, enjoy guided tours, and experience seamless transportation throughout their journey. Many packages also include meals, local experiences, and entry tickets to major attractions, making the trip stress-free and enjoyable.</p>",
        "topics": [],
        "user": {
            "pk": 166497,
            "forum_user": {
                "id": 166260,
                "user": 166497,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a3a033d0d2b92e2650cd81d0e86dd0f8?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-03T11:56:00.621498+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "savitaexports",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "vietnam-tour-package-in-california-customized-luxury-travel-experience",
        "pk": 4588,
        "published": false,
        "publish_date": "2026-04-03T12:20:28.939766+02:00"
    },
    {
        "title": "Participatory design to generate musical materials",
        "description": "2019.20 Artistic Research Residency.\r\nBrice Gatinet.\r\nIn collaboration with the Musical Representations Team IRCAM-STMS - Philippe Esling",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">2019.20 Artistic Research Residency</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<p>\"<strong>Development of software component using participatory design to generate musical materials using ACIDS ongoing project based on Artificial Intelligence\"</strong><br />In collaboration with the<span><span>&nbsp;</span><a href=\"https://www.stms-lab.fr/team/representations-musicales/\">Musical Representations Team</a></span><span>&nbsp;</span>IRCAM-STMS -<span>&nbsp;</span><a href=\"https://www.stms-lab.fr/person/philippe-esling/\">Philippe Esling</a></p>\r\n<p>This project uses a participatory design method to explore and augment research already taking place within different axes of the ACIDS team. This team based in IRCAM is specialized in Artificial Creative Intelligence and Data Inference. In my work, I intend to engage directly with research oriented around several approaches, including learning-based inductive orchestration, orchestral waveform generation, co-improvisation and learning. The outcome will be used to create orchestration based on a piano score, using real-time improvisation from the computer during the live performance to generate a tape part based on a specific dataset encompassing piano and electric guitar sounds. These results will be harnessed for the creation of a large-scale piece for piano, electric guitar, ensemble and electronics. 
Broadly speaking, my work will be realized in three phases: 1) an analysis of different ongoing projects being undertaken by the ACIDS team in order to ascertain the needs and expectations for the distinct software to be developed during my residency at IRCAM, 2) a conceptualization and implementation of specific prototypes for software tools, and 3) a post-creation evaluation to measure the usability of the resultant product.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Brice Gatinet</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biography</h3>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>Brice Gatinet is a French composer currently living in Montreal. Along his musical path, he has discovered many ways to express unique musical ideas, infusing his works with elements of jazz, improvisation, death metal and classical music. These influences are at the heart of his writing and musical thought, where technique, poetry and structure are intimately linked to create a personal expressive dynamic.</span></p>\r\n<p><span>In France, Gatinet studied musicology at Grenoble University, as well as Jazz and Musique Actuelle at the Chambéry conservatory. Since moving to Montreal, he has obtained a Master's in Composition at the Université de Montréal, and he is currently completing a doctorate in Music Composition at McGill University under the direction of Philippe Leroux. 
In 2016, he received a funded three-month residency at the Casa Velasquez in Madrid, and for the following year he has already committed to commissions from the Orchestre Symphonique de Montréal and le Nouvel Ensemble Moderne, among others.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<h2 class=\"dotted\">Links</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>https://soundcloud.com/brice-gatinet</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 40,
                "name": "Orchestration",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 32,
                "name": "Recherche Artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 30,
                "name": "Recherche Musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "participatory-design-to-generate-musical-materials",
        "pk": 212,
        "published": false,
        "publish_date": "2019-03-27T17:12:42+01:00"
    },
    {
        "title": "Poo-Wee-Pee: Sound in toilet and concrete emotion by Wai Yau SHIU (HK)",
        "description": "Poo-Wee-Pee is a toilet-related sonic demonstration with video materials.",
        "content": "<p><span>Poo-Wee-Pee is a toilet-related sonic demonstration with video materials. Different faeces, representing various timbre, move in the animation with physical logic. Their movement affects the projection of </span><span>sound in different speakers. On the other hand, the audience creates designed sound according to visual materials by unusual 'instruments'. </span><span>Also there is a discussion on how the sonic elements could evoke the secondary negative emotions directly, such as embarrassing and disgusting.</span></p>",
        "topics": [
            {
                "id": 1642,
                "name": "animation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3439,
                "name": "fart",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3441,
                "name": "live-processing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1835,
                "name": "maxmspjitter",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3438,
                "name": "pee",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3437,
                "name": "poop",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3440,
                "name": "secondaryemotion",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 38645,
            "forum_user": {
                "id": 38594,
                "user": 38645,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Profile_Picture_SHIU_Wai_Yau_William.JPG",
                "avatar_url": "/media/cache/de/e6/dee681ed23d80c79667c07330058d7e3.jpg",
                "biography": "SHIU Wai Yau, William is a Hong Kong-born composer, conductor and performer of piano, clarinet and French Horn. William obtained a master’s degree in music composition at KASK & Conservatorium, Ghent, Belgium. He obtained a Bachelor of Arts degree in Music Composition from Hong Kong Baptist University.\n\nWilliam’s compositions span a wide range of genres from chamber ensembles, orchestra, open-form to multimedia works. His music has been performed and exhibited in Hong Kong, Netherlands, Belgium, Austria, Portugal, etc. Ensemble/organisation William have collaborated with including Platypus Ensemble (Austria), Peter Benoit Fonds (Belgium), K622 Clarinet Ensembles (Hong Kong), Orkest de Ereprijs (Netherlands) and Cello Wercken Zutphen (Netherlands). In 2024, he was awarded the Ereprijs Xtra (Orkest de Ereprijs, Phion, NJO) commission to compose a new work for the 2025 Andriessen Festival.\n\nIn 2025 he formed ‘The Maniacs’. The ensemble Interprets any crazy ideas ‘hilariously but seriously’ from sound-based performances. They regularly perform innovative and bold contemporary music works around the world, alongside new compositions by local composers.",
                "date_modified": "2025-11-05T04:46:09.010884+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "williamshiu",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "poo-wee-pee-sound-in-toilet-and-concrete-emotion-by-wai-yau-shiu-hk",
        "pk": 3736,
        "published": false,
        "publish_date": "2025-10-03T10:31:33+02:00"
    },
    {
        "title": "Teaching and developing soundworks-playground@2.0.0 by Garth Paine",
        "description": "",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -100px; top: -20.0052px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>\r\n<p></p>\r\n<p><img src=\"/media/uploads/garth_headshot.jpg\" alt=\"\" max-width=\"1334\" max-height=\"889\" /></p>\r\n<p>Presented by Garth Paine</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/garthpaine/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p>Over the last several years, with the support of the Sound Music Movement Interaction team at IRCAM, Dr. Garth Paine has been teaching the playground framework for collective music performance across cell phones and using it in his own practice. This submission proposes a presentation about the experiences of teaching and using this framework, and, three short performances on the audiences cell phones, two by students and one by Dr. Paine.</p>",
        "topics": [],
        "user": {
            "pk": 92005,
            "forum_user": {
                "id": 91891,
                "user": 92005,
                "first_name": "Pierre",
                "last_name": "Provence",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/bb2fd89ea3d0aef48035393334059d96?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-04-30T12:50:45.553350+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1009,
                        "forum_user": 91891,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "provence",
            "first_name": "Pierre",
            "last_name": "Provence",
            "bookmarks": []
        },
        "slug": "teaching-and-developing-soundworks-playground200",
        "pk": 3354,
        "published": true,
        "publish_date": "2025-03-13T15:55:25+01:00"
    },
    {
        "title": "Vietnam Tour Package in Florida – Luxury Customized Travel Experience",
        "description": "Explore Vietnam with a customized Vietnam tour package in Florida. Enjoy luxury travel, cultural experiences, and hassle-free planning for your dream vacation.",
        "content": "<p>Planning an international vacation can be exciting yet overwhelming, especially when choosing the right destination and travel partner. A <strong><a href=\"https://midasiaroutes.com/trip/vietnam-tour-package-in-florida\">Vietnam tour package in Florida</a></strong> offers travelers a unique opportunity to explore one of Southeast Asia&rsquo;s most beautiful countries with ease and comfort. Designed for modern travelers, these packages combine luxury, culture, and adventure into a seamless travel experience.</p>\n<p>A well-crafted Vietnam tour package in Florida includes everything needed for a stress-free journey. From premium accommodations to guided sightseeing tours, every detail is carefully planned. Travelers can enjoy iconic destinations such as Ha Long Bay, Hanoi, Ho Chi Minh City, and Hoi An, all while experiencing Vietnam&rsquo;s rich history and vibrant culture. These destinations are known for their breathtaking landscapes, bustling markets, and authentic cuisine.</p>\n<p>One of the biggest advantages of choosing a Vietnam tour package in Florida is customization. Travelers can personalize their itinerary based on their preferences, whether they want a relaxing holiday, an adventurous trip, or a cultural exploration. These tailored experiences ensure that every traveler gets the most out of their journey, making it truly memorable.</p>\n<p>Luxury and comfort are key highlights of a Vietnam tour package in Florida. Travelers stay in high-quality hotels, enjoy guided tours, and benefit from smooth transportation arrangements. Many packages also include meals, entry fees, and local experiences, ensuring a hassle-free vacation. This level of convenience allows travelers to focus entirely on enjoying their trip without worrying about logistics.</p>\n<p>Another important feature of a Vietnam tour package in Florida is the inclusion of cultural experiences. 
Travelers get the chance to explore local traditions, taste authentic Vietnamese dishes, and interact with local communities. From street food tours in Hanoi to boat rides in the Mekong Delta, every activity offers a deeper connection to the destination.</p>\n<p>For those seeking adventure, a Vietnam tour package in Florida offers a variety of exciting activities. Travelers can cruise through the stunning limestone landscapes of Ha Long Bay, explore ancient towns like Hoi An, or experience the vibrant nightlife of Ho Chi Minh City. These experiences provide a perfect balance between relaxation and exploration.</p>",
        "topics": [],
        "user": {
            "pk": 166497,
            "forum_user": {
                "id": 166260,
                "user": 166497,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a3a033d0d2b92e2650cd81d0e86dd0f8?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-03T11:56:00.621498+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "savitaexports",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "vietnam-tour-package-in-florida-luxury-customized-travel-experience",
        "pk": 4587,
        "published": false,
        "publish_date": "2026-04-03T12:11:43.002303+02:00"
    },
    {
        "title": "Musical Interaction between AI and Humans: Convergence of AI and Humans in 21st Century Contemporary Music by Yuseon Won",
        "description": "This article tries to explore musical relation between AI and Humans systemically.",
        "content": "<h2>Musical Interaction between AI and Humans: Convergence of AI and Humans in 21st Century Contemporary Music</h2>\r\n<h2>- Yuseon Won</h2>\r\n<p>AI has increasingly influenced music composition, performance, and reception, leading to a growing scholarly interest in exploring its underlying musical mechanisms and artistic potential. However, much of the existing research tends to draw a rigid distinction between 'AI-generated music' and 'human-composed music,' revealing clear limitations. Questions such as \"Can a machine, rather than a human, be the agent of creation?\", and \"What unique artistic value does AI possess that distinguishes it from humans?\" underscore the challenges that current studies frequently confront. Many of these studies continue to reflect underlying anxieties about AI, primarily focusing on evaluating the artistic capabilities or technical proficiency of AI as something inherently separate from human creators. Even in research that acknowledges AI's role in collaborative creative processes, there often remains a lack of clear explanation regarding how human creativity and computational power interact and complement each other in practice.</p>\r\n<p>This study aims to expand the scope of research on AI in music by examining the specific ways in which humans and AI collaborate in the creation of 21st-century contemporary music, and exploring the artistic implications of these collaborations. Contemporary music is once again evolving as it engages with the profound influence of AI, moving towards a convergence between AI and human creators. 
Significantly, while the evolution of music has traditionally centered on the novelty of 'materials'&mdash;such as tonality, atonality, and form&mdash;it is now shifting towards an emphasis on the novelty of 'subjects,' particularly through the introduction of AI as a creative partner.</p>\r\n<p>Accordingly, 1) this study will categorize the relationship between humans and AI in the creative process into three types: 'complete substitution,' 'partial substitution,' and 'collaborator,' analyzing the distinct characteristics and nuances of each category. 2) Subsequently, the study will focus on 'AI improvisation' as a prominent example of human-AI convergence, examining its musical characteristics and significance. AI improvisation is more than a mere tool for human use or an extension of a composer's intent; it embodies a relationship of equal coexistence between humans and AI, enabled through real-time dialogue and interaction. This research will specifically delve into the principles and musical concepts underlying the AI improvisation works of composers such as Artemi-Maria Gioti, investigating how these works challenge and expand traditional music concepts and human-centered perspectives.</p>",
        "topics": [],
        "user": {
            "pk": 85814,
            "forum_user": {
                "id": 85712,
                "user": 85814,
                "first_name": "Yuseon",
                "last_name": "Won",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7ddecde344054df4f56fae187e46edad?s=120&d=retro",
                "biography": "Musicologist. After graduating from Ewha Womans University with degrees in Composition and Philosophy, she obtained both her Master’s and Ph.D. in Musicology from Seoul National University’s College of Music. Her research focuses on the new musical imagination emerging from the intersection of the digital and analog worlds, closely observing the evolving “new normal” in music.\nHer solo-authored book Music of New Normal: Listening to the Future through  Digital Convergence Music(2021), her edited volume The Digital Revolution and Music: The Aesthetics of YouTube, Mashups, and Artificial Intelligence(2021), and her co-authored work The Aesthetics of AI and Posthumanism in Music(2022) have all been recognized as \"Sejong Book Award for Excellence in Academic Literature\" recipients.\n  Some of her notable research includes “From Heard Music to Unheard Music: Digital Technology and Conceptual Music in the 21st Century”(2020), “Trauma and Memories Represented by Technology: Composition and Directing Strategy in Michel van der Aa’s Opera Blank Out”(2022), and “A Study on Composer LEE Donoung’s Robot Music: Focusing on dRobot”(2023).\n  Currently, she lectures at Seoul National University,",
                "date_modified": "2024-10-31T16:53:48.204149+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 985,
                        "forum_user": 85712,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "yuseon2024",
            "first_name": "Yuseon",
            "last_name": "Won",
            "bookmarks": []
        },
        "slug": "musical-interaction-between-ai-and-humans-convergence-of-ai-and-humans-in-21st-century-contemporary-music",
        "pk": 3048,
        "published": true,
        "publish_date": "2024-10-22T11:04:21+02:00"
    },
    {
        "title": "Orchidea & Electroacoustics I",
        "description": "A first attempt to use the Orchidea package to create electroacoustic material",
        "content": "<p>Hi,</p>\r\n<p>I would like to open a subject concerning the possibility to use Orchidea in electroacoustic.</p>\r\n<p>I will first try to lay out general questions about an uncertain approach and then post a link to a screen video captures of some experimentations in with the Orchidea package and any other related helpful programming module.</p>\r\n<p>Well my first question is straight forward :</p>\r\n<p>What can I do with Orchidea once I know that I am not an orchestrator ?</p>\r\n<p>To answer this question, first I need to know what Orchidea can do !</p>\r\n<p>So I began my journey into the unknown... exactly here :-) [Yan Maresz. Compositional approaches](https://medias.ircam.fr/xf749d4)</p>\r\n<p>This is what we might call an empirical approach. All these strategies, problems, responsibilities, questions ! What a wonder !</p>\r\n<p>Hold on... did you hear that at 21'55\" !? The maestro talks about the use of Orchidea in electroacoustic creation ? Oh no is it already the end of the conference ?</p>\r\n<p>Well I can tell now that it sounds promising and yes things can be done with Orchidea besides orchestration.</p>\r\n<p>But what kind of things ? Personnal Databases ? Concrete sounds ? Aimed targets ? Sound textures ? How to implement the tools for that ?</p>\r\n<p>And here comes another unbelievable set of many-folded modules and tutorials with the Orchidea package. Very well organised and documented by the developers and the cherry on the cake, we can find a very robust workspace template at the end of the <strong>Building a workspace</strong> tutorial ! :clap:</p>\r\n<p>While slowly wandering through the tutorials and the modules some insights came but for what ? Am I quite certain to find an electroacoustic creation strategy without an intended use or application i.e. 
an idea of what I want to do or the aim of that idea and how to reach it ?</p>\r\n<p>Well I must confess that I have a mixed piece going on, suffice to say that it is supposed to occur, one day, in an abandoned church at the periphery of 9 touristic churches and cathedrals, in a medieval city. Our aim is to make it resonate !</p>\r\n<p>So here is a first attempt to tame Orchidea. Others will hopefully follow :-)</p>\r\n<p><a href=\"https://youtu.be/VGu2fFvKYyw\" target=\"_blank\" rel=\"noopener\">Orchidea &amp; Electroacoustics I</a></p>\r\n<p>&nbsp;</p>\r\n<p>Cheers !</p>\r\n<p>nadir B.</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 386,
                "name": "Composition strategies",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 297,
                "name": "Electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 385,
                "name": "Ircam media",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 384,
                "name": "Orchidea",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 26,
            "forum_user": {
                "id": 26,
                "user": 26,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Acousmatic_Miniature_1.jpg",
                "avatar_url": "/media/cache/e4/a3/e4a33a726757791da7c0210ad665a60f.jpg",
                "biography": "Membre actif du Forum Ircam et utilisateur des logiciels de l’institut où il a été formé par Alexis Baskind (Spatialisateur), Jean\nLochard (Audiosculpt), Mikhail Malt (Open Music) , Benjamin Thigpen (Max), Nicolas Misdariis (Sound Design).\nNadir Babouri is an active member of IRCAM Forum and a user of IRCAM's softwares. He studied with Alexis Baskind (Spatialisateur), Jean Lochard (Audiosculpt), Mikhail Malt (Open Music), Benjamin Thigpen (MaxMsp), Nicolas Misdariis (Sound Design) and Jean-Louis Giavitto (Antescofo Language)",
                "date_modified": "2025-04-15T10:27:19.119235+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nadir-b",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "orchidea-electroacoustics-i",
        "pk": 634,
        "published": false,
        "publish_date": "2020-04-17T16:33:25+02:00"
    },
    {
        "title": "Positioning System References for Digital Arts",
        "description": "Hi,&nbsp;",
        "content": "<p>Hi,&nbsp;</p>\r\n<p>this document is a simple archive with links to solutions for geo-tagging and localization. Please feel free to update the list in the comments...</p>\r\n<p><span style=\"font-weight: 400;\">Here is a useful Youtube webinar on 'Choosing the Right Beacon Hardware': </span><a href=\"https://www.youtube.com/watch?v=eHtDmjUj1Ck\"><span style=\"font-weight: 400;\">https://www.youtube.com/watch?v=eHtDmjUj1Ck </span></a></p>\r\n<p><span style=\"font-weight: 400;\">(Note at 16:40 - discussion about WiFi access Aerohive hubs with beacons installed). </span><span style=\"font-weight: 400;\">This is an interesting option as hubbs for visual content delivery are likely to be required for the project. &nbsp;They are however much more expensive than other options on this page - probably because of the WiFi hub solution. No SDK was apparent on their site: </span><a href=\"https://www.aerohive.com/products/\"><span style=\"font-weight: 400;\">https://www.aerohive.com/products/</span></a></p>\r\n<p><span style=\"font-weight: 400;\">Currently the only beacons I can find with Unity3D integration is:</span></p>\r\n<p>- <a href=\"https://estimote.com\"><span style=\"font-weight: 400;\">https://estimote.com</span></a></p>\r\n<p>- <a href=\"https://www.navisens.com\"><span style=\"font-weight: 400;\">https://www.navisens.com</span></a><span style=\"font-weight: 400;\"> (</span><span style=\"font-weight: 400;\">IOS, Android. 
No infrastructure required)</span></p>\r\n<p><span style=\"font-weight: 400;\">motionDNA&trade; is available as a native SDK for iOS and Android, and as a web api which processes sensor data entirely within the browser: no app required!</span></p>\r\n<p><span style=\"font-weight: 400;\">This video shows an impressive accuracy and speed of response </span></p>\r\n<p><a href=\"https://www.youtube.com/watch?v=1iEOvFlLiUM\"><span style=\"font-weight: 400;\">https://www.youtube.com/watch?v=1iEOvFlLiUM</span></a></p>\r\n<p><a href=\"https://www.youtube.com/watch?v=auUZJz1ZTEU\"><span style=\"font-weight: 400;\">https://www.youtube.com/watch?v=auUZJz1ZTEU</span></a></p>\r\n<p><a href=\"https://www.youtube.com/watch?v=J8430X2g7fE\"><span style=\"font-weight: 400;\">https://www.youtube.com/watch?v=J8430X2g7fE</span></a></p>\r\n<p><span style=\"font-weight: 400;\">Kontact.io</span></p>\r\n<p><span style=\"font-weight: 400;\">Make a hub which can be used to control all beacons and to stream data</span></p>\r\n<p><a href=\"https://store.kontakt.io/next-generation/33-gateway.html\"><span style=\"font-weight: 400;\">https://store.kontakt.io/next-generation/33-gateway.html</span></a></p>\r\n<p><a href=\"https://www.oriient.me\"><span style=\"font-weight: 400;\">https://www.oriient.me</span></a></p>\r\n<p><span style=\"font-weight: 400;\">VIDEO on accuracy: </span><a href=\"https://www.youtube.com/watch?v=0HKjEPh_Pu8\"><span style=\"font-weight: 400;\">https://www.youtube.com/watch?v=0HKjEPh_Pu8</span></a></p>\r\n<h3><span style=\"font-weight: 400;\">Tracking system references </span></h3>\r\n<p><span style=\"font-weight: 400;\">SLAM </span><a href=\"https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping\"><span style=\"font-weight: 400;\">https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping</span></a></p>\r\n<p><span style=\"font-weight: 400;\">RECO </span><a href=\"http://www.recho.org\"><span style=\"font-weight: 
400;\">http://www.recho.org</span></a></p>\r\n<p><span style=\"font-weight: 400;\">Sonic Maps </span><a href=\"http://sonicmaps.org/about.html\"><span style=\"font-weight: 400;\">http://sonicmaps.org/about.html</span></a></p>\r\n<p><span style=\"font-weight: 400;\">PodWalk </span><a href=\"http://podwalk.org/team/\"><span style=\"font-weight: 400;\">http://podwalk.org/team/</span></a></p>\r\n<h3><span style=\"font-weight: 400;\">Summary Article</span></h3>\r\n<p><a href=\"https://itechcraft.com/precision-indoor-navigation/\"><span style=\"font-weight: 400;\">https://itechcraft.com/precision-indoor-navigation/</span></a></p>\r\n<h3><span style=\"font-weight: 400;\">Using Beacons </span></h3>\r\n<p><a href=\"https://www.accuware.com/products/bluetooth-beacon-tracker/\"><span style=\"font-weight: 400;\">https://www.accuware.com/products/bluetooth-beacon-tracker/</span></a></p>\r\n<p><a href=\"https://estimote.com/products/\"><span style=\"font-weight: 400;\">https://estimote.com/products/</span></a></p>\r\n<h3><span style=\"font-weight: 400;\">Beacons + sensors</span></h3>\r\n<p><a href=\"https://sensalytics.net/de\"><span style=\"font-weight: 400;\">https://sensalytics.net/de</span></a></p>\r\n<p><a href=\"https://navibees.com/introduction-indoor-navigation-systems/\"><span style=\"font-weight: 400;\">https://navibees.com/introduction-indoor-navigation-systems/</span></a></p>\r\n<p><a href=\"https://estimote.com\"><span style=\"font-weight: 400;\">https://estimote.com</span></a></p>\r\n<p><a href=\"https://senion.com/indoor-positioning-for-office/\"><span style=\"font-weight: 400;\">https://senion.com/indoor-positioning-for-office/</span></a></p>\r\n<h3><span style=\"font-weight: 400;\">WIFI tracking</span></h3>\r\n<p><a href=\"http://www.gizmodo.co.uk/2017/04/exclusive-heres-what-museums-learn-by-tracking-your-phone/\"><span style=\"font-weight: 400;\">http://www.gizmodo.co.uk/2017/04/exclusive-heres-what-museums-learn-by-tracking-your-phone/</span></a></p>\r\n<p><a 
href=\"https://www.youtube.com/watch?v=sOce7B2_6Sk\"><span style=\"font-weight: 400;\">https://www.youtube.com/watch?v=sOce7B2_6Sk</span></a></p>\r\n<h3><span style=\"font-weight: 400;\">Other systems</span></h3>\r\n<p><span style=\"font-weight: 400;\">Insoft seem to offer solutions in all modes (WiFi, BT etc)</span></p>\r\n<p><a href=\"https://www.infsoft.com/technology/sensors/bluetooth-low-energy-beacons\"><span style=\"font-weight: 400;\">https://www.infsoft.com/technology/sensors/bluetooth-low-energy-beacons</span></a></p>\r\n<p><a href=\"https://visioglobe.com/ips-indoor-positioning-system/\"><span style=\"font-weight: 400;\">https://visioglobe.com/ips-indoor-positioning-system/</span></a></p>\r\n<p><span style=\"font-weight: 400;\">GeoMagnetic </span><a href=\"http://www.indooratlas.com\"><span style=\"font-weight: 400;\">http://www.indooratlas.com</span></a></p>\r\n<h3><span style=\"font-weight: 400;\">Open Source - Indoor Tracking</span></h3>\r\n<p><a href=\"http://redpin.org\"><span style=\"font-weight: 400;\">http://redpin.org</span></a></p>\r\n<p><a href=\"http://www.vs.inf.ethz.ch/publ/papers/bolligph-redpin2008.pdf\"><span style=\"font-weight: 400;\">http://www.vs.inf.ethz.ch/publ/papers/bolligph-redpin2008.pdf</span></a></p>\r\n<h3><span style=\"font-weight: 400;\">Google Indoor Maps</span></h3>\r\n<p><a href=\"https://www.mapspeople.com/mapsindoors/?gclid=EAIaIQobChMI0tWfyt-B2AIVB_EbCh3cFwtrEAAYAiAAEgInKfD_BwE\"><span style=\"font-weight: 400;\">https://www.mapspeople.com/mapsindoors/?gclid=EAIaIQobChMI0tWfyt-B2AIVB_EbCh3cFwtrEAAYAiAAEgInKfD_BwE</span></a></p>\r\n<p><span style=\"font-weight: 400;\">Infsoft </span></p>\r\n<p><a href=\"https://www.infsoft.com/technology/sensors/bluetooth-low-energy-beacons\"><span style=\"font-weight: 400;\">https://www.infsoft.com/technology/sensors/bluetooth-low-energy-beacons</span></a></p>\r\n<p><span style=\"font-weight: 400;\">Basic overview</span></p>\r\n<p><a 
href=\"http://academy.pulsatehq.com/bluetooth-beacons\"><span style=\"font-weight: 400;\">http://academy.pulsatehq.com/bluetooth-beacons</span></a></p>\r\n<p><a href=\"https://www.youtube.com/watch?v=L44m7otNI7o\"><span style=\"font-weight: 400;\">https://www.youtube.com/watch?v=L44m7otNI7o</span></a></p>\r\n<p><span style=\"font-weight: 400;\">open source software-hardware framework that can be used to build arbitrary configurations of inertial motion capture systems.</span></p>\r\n<p><a href=\"http://chordata.cc/\"><span style=\"font-weight: 400;\">http://chordata.cc</span></a></p>\r\n<p><span style=\"font-weight: 400;\">Here&rsquo;s a brief video explaining how the system works</span></p>\r\n<p><a href=\"https://www.youtube.com/watch?v=npJumH0eol0\"><span style=\"font-weight: 400;\">https://www.youtube.com/watch?v=npJumH0eol0</span></a></p>\r\n<p><span style=\"font-weight: 400;\">A simple live performance we made with the system (the accuracy and responsiveness can be appreciated here):</span></p>\r\n<p><a href=\"https://www.youtube.com/watch?v=vp6J6rabenk\"><span style=\"font-weight: 400;\">https://www.youtube.com/watch?v=vp6J6rabenk</span></a></p>\r\n<p><span style=\"font-weight: 400;\">Technical descriptions and extensive project logs at out hackaday project page:</span></p>\r\n<p><a href=\"https://hackaday.io/project/27519-motion-capture-system-that-you-can-build-yourself\"><span style=\"font-weight: 400;\">https://hackaday.io/project/27519-motion-capture-system-that-you-can-build-yourself</span></a></p>\r\n<p><span style=\"font-weight: 400;\">Code sources and KICAD projects at our repositories:</span></p>\r\n<p><a href=\"https://gitlab.com/chordata\"><span style=\"font-weight: 400;\">https://gitlab.com/chordata</span></a></p>",
        "topics": [
            {
                "id": 20,
                "name": "Beacon",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 18,
                "name": "Digital arts",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 19,
                "name": "Geolocalization",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 16,
                "name": "Gps ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 21,
                "name": "Kinect",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 17,
                "name": "Position",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 12,
            "forum_user": {
                "id": 12,
                "user": 12,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6df540490e28efb30078458b435d3b79?s=120&d=retro",
                "biography": "Greg Beller works as an artist, a researcher, a teacher and a computer designer for contemporary arts. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student on generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department and the product manager of the IRCAM Forum. Founder of the Synekine Project, he is currently doing a second PhD on “Natural Interfaces for Computer Music” at the HfMT Hamburg in the creation and the performance of artistic moments.",
                "date_modified": "2025-10-31T17:44:16.631619+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Askeladd",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": []
        },
        "slug": "positioning-system-references-for-digital-arts",
        "pk": 8,
        "published": false,
        "publish_date": "2019-02-04T15:13:14+01:00"
    },
    {
        "title": "Bonjour",
        "description": "Bonjour, je suis débutant, par où commencer ?",
        "content": "<p>Bonjour, je suis d&eacute;butant, par o&ugrave; commencer ?</p>",
        "topics": [],
        "user": {
            "pk": 93756,
            "forum_user": {
                "id": 93641,
                "user": 93756,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/391ca7f857890be66ab09e7e2b9683e2dcacb841_full.jpg",
                "avatar_url": "/media/cache/c9/d7/c9d79f241df361c17f1d7c5f61377de5.jpg",
                "biography": "Sans doute existe-t-il de nombreux sites qui peuvent aider même les personnes de plus de 50 ans à trouver l'âme sœur, car nous méritons tous d'être aimés, cite de rencontre, ici vous en apprendrez plus sur les principes de recherche que propose ce site, c'est la meilleure option pour repartir sur de nouvelles bases.",
                "date_modified": "2025-07-06T00:51:38.233483+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jatin9",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "bonjour",
        "pk": 3531,
        "published": false,
        "publish_date": "2025-07-05T23:21:09.288187+02:00"
    },
    {
        "title": "Embracing the unrepeatable : non-idiomatic manipulation by Eleonora Podestà, Thomas De Santi, Erica Vincenti",
        "description": "Embracing the Unrepeatable is a project conceived by Eleonora Podestà in collaboration with Thomas De Santi and Erica Vincenti. The main output of this project is a musical performance that explores the essence of co-agency. The fundamental idea is to explore musical improvisation as a shared act, one that materializes not solely through performer’s real time dialogue with Somax2, but through the union of that interaction with the direct, and creative intervention of the audience.",
        "content": "<p style=\"text-align: justify;\"><strong><strong><strong>➡️ This presentation is part of&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></strong></p>\r\n<p style=\"text-align: justify;\"><strong>CONCEPT</strong></p>\r\n<p style=\"text-align: justify;\"><em>Embracing the unrepeatable</em> is a project that aims at the realisation of a collective performance. The main purpose of this project is to turn members of the audience into an active part of the musical performance, which features a continuous dialogue between the violin and <em>Somax2</em>. The performance is intended to explore the implications of a triadic co-agency model that integrates the performer, interactive generative systems and the audience, and blends them into the same framework.&nbsp;</p>\r\n<p style=\"text-align: justify;\">The crucial element of this project is undoubtedly audience participation. Instead of passive listening, the audience members are called to interact dynamically, ensuring their contribution defines and shifts the course of the musical event. 
Each participant's choice influences the final result, allowing the performance to become an unpredictable and unrepeatable musical product.</p>\r\n<p style=\"text-align: justify;\"><strong>STRUCTURE</strong></p>\r\n<p style=\"text-align: justify;\">In <em>Embracing the Unrepeatable</em>, each audience member will have the freedom to choose whether or not to actively participate in the performance; a MIDI controller will be provided (<em>Behringer X-Touch Compact</em>), allowing them to manipulate the timbre of the violin and the texture of the sound in real time, and to contribute to the creative process alongside the performer and <em>Somax2</em>.</p>\r\n<p style=\"text-align: justify;\">Any action performed by the audience will be perceived by the performer, who will then adapt their improvisation based on the dual input coming from both the audience and Somax2. The violinist will play without prior knowledge of the specific mappings or device settings, as the configurations remain hidden. This creates a feedback loop of mutual uncertainty.</p>\r\n<p style=\"text-align: justify;\"><strong>IMPLICATIONS</strong></p>\r\n<p style=\"text-align: justify;\">This context challenges the traditional conventions of concert music. By aiming to dissolve the boundary between audience, performer and interactive generative systems, <em>Embracing the Unrepeatable</em> becomes the direct result of their collaboration.</p>\r\n<p style=\"text-align: justify;\">The project also aims to highlight the immanent extemporaneity of improvisation. It transforms the performative act into a shared experience where every choice is born and developed in the present moment. 
Improvisation, thus, becomes a space of creative and interactive freedom, reaching new sonic frontiers through a constant interplay of interpretations and reactions.</p>\r\n<p style=\"text-align: justify;\"><strong>CONCLUSION</strong></p>\r\n<p style=\"text-align: justify;\">Ultimately, <em>Embracing the Unrepeatable</em> defines the musical event as an ecosystem. Based on a triadic co-agency model, the project shifts the focus from a single creator to a distributed creativity where the performer, the audience, and <em>Somax2</em> are equally essential. The outcome is a performance model that values the unexpected and celebrates collective contribution.</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>",
        "topics": [
            {
                "id": 4289,
                "name": "Co-agency",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2638,
                "name": "collectivenss ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1875,
                "name": "violin",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 108001,
            "forum_user": {
                "id": 107866,
                "user": 108001,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/PHOTO-2025-07-05-21-39-35.jpg",
                "avatar_url": "/media/cache/24/79/2479595baeb5830c15209e5b5f40e642.jpg",
                "biography": "Eleonora Sofia Podestà (2004) earned her Bachelor (2022) and Master (2024) degree in violin performance with honors at Conservatorio “G. Puccini” in La Spezia, under the guidance of Duccio Ceccanti. She also followed advanced training courses in chamber music in Scuola di Musica di Fiesole and Accademia di Musica di Pinerolo. She performs across Europe, both solo and in chamber ensembles. Her passion for new music began with LabMusCont and led her to join GAMO ensemble (Gruppo Aperto Musica Oggi). In 2024 she won a 3-year AFAM doctoral scholarship focused on contemporary violin performance, under the supervision of Alberto Gatti. In 2025 she attended a workshop with Ensemble Intercontemporain, held in Accademia \"W. Stauffer\" in Cremona, where she had the chance to work with Jeanne Marie Conquer and Diego Tosi. The focal point of her current research is the role of the performer in contemporary music, exploring elements such as interaction between performer and new technologies, extended violin techniques and improvisation.",
                "date_modified": "2026-03-02T11:53:48.560438+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "eleonorasofiapodesta",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3620,
                    "user": 108001,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4400,
                    "user": 108001,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "embracing-the-unrepeatable-non-idiomatic-manipulation",
        "pk": 4400,
        "published": true,
        "publish_date": "2026-02-20T00:45:28+01:00"
    },
    {
        "title": "Deriving Synchrony : Système interactif en temps réel de traduction des ondes cérébrales en musique - Johnny Tomasiello",
        "description": "Deriving Synchrony : A Real Time Interactive Brainwave-to-Music Translation Performance System est une œuvre immersive dont l'objectif est d'explorer la relation réciproque entre l'activité électrique du cerveau et les stimuli externes qui ont été générés et définis par ces mêmes événements physiologiques, grâce à l'utilisation d'une interface cerveau-ordinateur-musique (BCMI) qui permet la sonification des données capturées par un électroencéphalogramme, lesquelles sont traduites en stimuli musicaux en temps réel.\r\n\r\nIl s'agit d'un système interactif de composition assistée par ordinateur qui traduit l'activité électrique du cerveau en compositions musicales en temps réel, tout en permettant à l'utilisateur d'exercer un contrôle conscient sur ce processus de traduction et sur la production sonore générative qui en résulte dans un cadre musical. Il peut également enseigner aux participants comment modifier positivement leur propre physiologie en apprenant à influencer les fonctions du système nerveux autonome par le biais d'un retour d'information neurologique et bidirectionnel.",
        "content": "<p><strong><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></strong></p>\r\n<p>Pr&eacute;sent&eacute; par : Johnny Tomasiello <br /><a href=\"https://forum.ircam.fr/profile/johnnytomasiello/\">Biographie</a></p>\r\n<p></p>\r\n<p><strong>R&eacute;sum&eacute; :</strong></p>\r\n<p><strong>Deriving Synchrony : A Real Time Interactive Brainwave-to-Music Translation Performance System</strong> est une &oelig;uvre immersive dont le but est d'explorer la relation r&eacute;ciproque entre l'activit&eacute; &eacute;lectrique du cerveau et les stimuli externes qui ont &eacute;t&eacute; g&eacute;n&eacute;r&eacute;s et d&eacute;finis par ces m&ecirc;mes &eacute;v&eacute;nements physiologiques, gr&acirc;ce &agrave; l'utilisation d'une interface cerveau-ordinateur-musique (BCMI) qui permet la sonification des donn&eacute;es captur&eacute;es par un &eacute;lectroenc&eacute;phalogramme, qui sont traduites en stimuli musicaux en temps r&eacute;el.</p>\r\n<p>Il s'agit d'un syst&egrave;me interactif de composition assist&eacute;e par ordinateur qui traduit l'activit&eacute; &eacute;lectrique du cerveau en compositions musicales en temps r&eacute;el, tout en permettant &agrave; l'utilisateur d'exercer un contr&ocirc;le conscient sur ce processus de traduction et sur la production sonore g&eacute;n&eacute;rative qui en r&eacute;sulte dans un cadre musical. 
Il peut &eacute;galement enseigner aux participants comment modifier positivement leur propre physiologie en apprenant &agrave; influencer les fonctions du syst&egrave;me nerveux autonome par le biais d'un retour d'information neurologique et bidirectionnel.</p>\r\n<p><strong>Introduction :</strong></p>\r\n<p>Les donn&eacute;es de l'EEG sur les ondes c&eacute;r&eacute;brales ont permis de classer avec succ&egrave;s les &eacute;tats mentaux [1], qui affectent la \"modulation autonome du syst&egrave;me cardio-vasculaire\" [2], et il existe des &eacute;tudes sur la fa&ccedil;on dont la musique peut influencer une r&eacute;ponse du syst&egrave;me nerveux autonome [3]. [C'est en ayant ces ph&eacute;nom&egrave;nes &agrave; l'esprit que ce travail a &eacute;t&eacute; con&ccedil;u et d&eacute;velopp&eacute;.</p>\r\n<p>Les changements dans la zone alpha sont l'objet principal de ce projet, puisque la recherche a montr&eacute; que la stimulation de l'activit&eacute; dans la zone alpha entra&icirc;ne une relaxation musculaire, une r&eacute;duction de la douleur, une r&eacute;gulation du rythme respiratoire et une diminution de la fr&eacute;quence cardiaque [4] [5] [6]. 
[Il a &eacute;galement &eacute;t&eacute; utilis&eacute; pour r&eacute;duire le stress, l'anxi&eacute;t&eacute; et la d&eacute;pression, et peut favoriser l'am&eacute;lioration de la m&eacute;moire et des performances mentales, et contribuer au traitement des l&eacute;sions c&eacute;r&eacute;brales.</p>\r\n<p>Mes recherches pr&eacute;c&eacute;dentes sur ce sujet ont mis l'accent sur l'exploration et la quantification des effets neurologiques de la modulation des ondes c&eacute;r&eacute;brales et des processus physiologiques correspondants par le biais d'un feedback neuro- et bidirectionnel comme forme de th&eacute;rapie et d'entra&icirc;nement aux ondes alpha, o&ugrave; la musique g&eacute;n&eacute;rative sert de syst&egrave;me de neurofeedback en temps r&eacute;el qui se comporte de mani&egrave;re &agrave; encourager des r&eacute;ponses optimales des ondes c&eacute;r&eacute;brales, en particulier dans la zone alpha.</p>\r\n<p>J'ai mis &agrave; profit mon exp&eacute;rience dans cette recherche et les donn&eacute;es que j'ai recueillies pour proposer cette nouvelle it&eacute;ration. Ce syst&egrave;me actuel permet &agrave; l'utilisateur un contr&ocirc;le plus actif de la performance et de la musicalit&eacute; du feedback g&eacute;n&eacute;ratif, r&eacute;sultant en un syst&egrave;me de composition et de performance dont le comportement est d&eacute;fini de mani&egrave;re plus litt&eacute;rale par les intentions de l'utilisateur, tout en continuant &agrave; fonctionner comme un syst&egrave;me de neurofeedback r&eacute;ciproque.</p>\r\n<p>Ce qui est explor&eacute; ici est donc &eacute;largi pour inclure la mani&egrave;re dont l'ajout du comportement orient&eacute; vers un but d'un paradigme de changement de t&acirc;che affecte la flexibilit&eacute; cognitive [7], et la quantit&eacute; de traitement simultan&eacute; conscient n&eacute;cessaire pour qu'un sujet affecte le feedback musical qui en r&eacute;sulte. 
This additional dimension takes into account the neural computations that determine how neurons make firing decisions, which in turn directly determines the brainwave activity being measured.</p>\r\n<p>The procedure, when using this work to explore the physiological effects of neuro- and bidirectional feedback, begins by obtaining and comparing two data sets: a control set and a therapeutic set. The control set tracks EEG data without the musical feedback, while the therapeutic set records the data with the feedback.</p>\r\n<p>The research methodology explores how to collect and quantify physiological data through non-invasive neuroimaging, effectively using the subject's brainwaves to produce real-time interactive compositions and soundscapes which, experienced simultaneously by the subject, have the capacity to alter their physiological responses.</p>\r\n<p>The melodic and rhythmic content is derived from, and constantly influenced by, the subject's EEG readings. 
A subject concentrating on the resulting feedback can attempt to bring about a change in their physiological systems, with the dual goal of directly affecting the musical performance and achieving an optimal alpha response.</p>\r\n<p>The resulting physiological responses are recorded and measured to determine how effectively external stimuli can affect the human body physiologically and psychologically.</p>\r\n<p>Beyond investigating these neuroscientific questions, this project aims to explore the validity of applying the scientific method within an artistic process. The methodology consists of creating an evidence-based system for developing research-driven projects.</p>\r\n<p>As Gita Sarabhai told John Cage, \"music conditions the mind, leading to moments in life that are complete and fulfilled\" [8]. In this case, music can also be used by the mind to condition the body.</p>\r\n<p></p>\r\n<p><strong>EEG information:</strong></p>\r\n<p>An electroencephalogram (EEG) is an electrophysiological monitoring method used to record the electrical activity of the brain. The typical EEG signal of an adult human is between 10 and 100 &micro;V (microvolts) in amplitude when measured at the scalp. The EEG was invented by the German psychiatrist Hans Berger in 1929, and research into interpreting and modulating brainwaves began soon afterwards. 
EEG makes it possible to measure neural activity directly and to capture cognitive processes in real time. Berger demonstrated that alpha waves (also known as Berger waves) are generated by neurons in the cerebral cortex.</p>\r\n<p>In 1934, the English physiologists Edgar Adrian and Bryan Matthews first described the sonification of alpha waves derived from EEG data [9]. They found that \"non-visual activities demanding full attention (for example mental arithmetic) abolish the waves; sensory stimulations that demand attention do so as well\" [10], showing how concentration and thought processes affect activity in the alpha frequency range.</p>\r\n<p>The brain activity recorded in an EEG is the sum of the inhibitory and excitatory post-synaptic potentials occurring across neuronal membranes [11].</p>\r\n<p>Measurements are taken using electrodes placed on the scalp. The measurements are divided into five frequency bands, delineating slow, moderate, and fast waves. 
The bands, from slowest to fastest, are:</p>\r\n<p><strong>Delta</strong>, roughly 1.0 Hz to 4.0 Hz, signifying the deepest meditation or dreamless sleep.</p>\r\n<p><strong>Theta</strong>, roughly 4 Hz to 8 Hz, signifying meditation or deep sleep.</p>\r\n<p><strong>Alpha</strong>, roughly 7.5 Hz to 13 Hz, representing quietly flowing thoughts.</p>\r\n<p><strong>Beta</strong>, roughly 13 Hz to 30 Hz, corresponding to a normal waking state.</p>\r\n<p>And</p>\r\n<p><strong>Gamma</strong>, roughly 30 Hz to 44 Hz, which is most active during the simultaneous processing of information engaging several different areas of the brain.</p>\r\n<p></p>\r\n<p><strong>A history of EEG in music:&nbsp;</strong></p>\r\n<p>The physicist Edmond Dewan began studying brainwaves in the early 1960s and developed a \"brainwave control system\". The system detected changes in alpha rhythms and could switch lighting on or off. The light could also be replaced by \"a sound device that beeped when switched on\", which allowed Dewan to spell out the phrase \"I can talk\" in Morse code&nbsp;[9]. Dewan later met the experimental composer Alvin Lucier, inspiring the first brainwave composition.</p>\r\n<p>Alvin Lucier created Music for Solo Performer in 1965. 
The composer sat in a chair on stage with his eyes closed while his brainwaves were recorded. The recorded data were amplified and routed to loudspeakers placed against different kinds of percussion instruments, so that the vibration of the loudspeakers made the instruments sound.&nbsp;<br /><br />Lucier was able to control the percussion events by mastering his cognitive functions, and found that any break in concentration disrupted that control. Although mastering the alpha rhythm was (and remains) difficult, Music for Solo Performer contributed greatly to the field of experimental music and illustrated the depth of possibility in using EEG control over musical performance.&nbsp;<br /><br /></p>\r\n<p>In 1973, the computer scientist Jacques J. Vidal published the paper Toward Direct Brain-Computer Communication, which first proposed the brain-computer interface (BCI): a means of using the brain to control external devices.</p>\r\n<p>This was the very beginning of research into brain-computer music interfacing (BCMI), which has evolved into an interdisciplinary field of study \"at the crossroads of music, science and biomedical engineering\" [12]. 
BCMIs (also called brain-machine interfaces, or BMIs) are still in use today, and the field of research around them is still in its early days.</p>\r\n<p>Paul Lehrer, under whose direction I studied at UMDNJ, has contributed significantly to research in psychophysics from the 1990s to the present, with studies on biofeedback and stress-management techniques. Dr. Lehrer has also set standards for music therapies, their use as relaxation techniques, and their beneficial physiological effects, by testing those benefits in subjects with asthma.&nbsp;One of his recent research papers, from 2014, Heart Rate Variability Biofeedback: How and Why Does it Work? [14], examined the effectiveness of heart rate variability biofeedback (HRVB) as a treatment for a variety of disorders, as well as its use in performance enhancement.</p>\r\n<p></p>\r\n<p><strong>Project overview:</strong></p>\r\n<p><strong><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/462279dfab3f9eb62901ad9f3e06480e.png\" /></strong></p>\r\n<p>This project records a subject's EEG signals using the four non-invasive, dry, extracranial electrodes of a commercially available MUSE EEG headband. Readings are taken from the TP9, AF7, AF8, and TP10 electrodes, as specified by the international standard EEG placement system, and the data are converted into absolute band powers, based on the logarithm of the power spectral density (PSD) of the EEG data for each channel. 
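As a rough illustration of that conversion step, the absolute band power for each range can be computed as the log of the PSD integrated over the band. This is only a sketch under stated assumptions: the band edges follow the ranges listed in this article, while the sampling rate, function name, and Welch-based PSD estimate are illustrative choices, not the project's actual Max/Mind Monitor implementation.

```python
import numpy as np
from scipy.signal import welch

# Frequency-band edges in Hz, as listed in the article
# (note the slight theta/alpha overlap around 7.5-8 Hz).
BANDS = {
    "delta": (1.0, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (7.5, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 44.0),
}

def absolute_band_powers(channel: np.ndarray, fs: float = 256.0) -> dict:
    """Absolute band power per band: log10 of the PSD integrated over the band."""
    freqs, psd = welch(channel, fs=fs, nperseg=min(len(channel), int(2 * fs)))
    df = freqs[1] - freqs[0]  # frequency resolution of the estimate
    return {
        name: float(np.log10(psd[(freqs >= lo) & (freqs < hi)].sum() * df))
        for name, (lo, hi) in BANDS.items()
    }
```

Fed a signal dominated by a 10 Hz oscillation, a function like this would report alpha as the strongest band, mirroring how the headband-side conversion exposes one power value per band per channel.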
Heart-rate data are obtained from PPG measurements. The EEG readings are recorded in bels/dB to determine the PSD in each of the frequency ranges.</p>\r\n<p>The EEG readings are translated into music in real time. The time base of the musical events can be variable (and/or driven by the brainwave data), or constrained by a steady clock. The choice of scales, modes, and chords used, as well as the rhythms and performance characteristics, must be carefully considered in advance so that a finite set of parameters extracted from the full EEG data set can be analyzed and used to produce a dynamic, well-formed piece of music.</p>\r\n<p>The system has three main sections:</p>\r\n<p>1:&nbsp;The<strong> EEG data capture</strong> section.</p>\r\n<p>2: The <strong>EEG data conversion</strong> section.</p>\r\n<p>3: The <strong>Sound generation and DSP</strong> section.</p>\r\n<p>The (1) <strong>EEG data capture</strong> section receives the EEG data from the Muse headset, converted to OSC data and transmitted over WiFi via the Mind Monitor iOS app.&nbsp; These data are then split into five distinct brainwave frequency bands: delta, theta, alpha, beta, and gamma.&nbsp; Other data are captured as well, including accelerometer, gyroscope, eye-blink, and jaw-clench readings, in order to monitor for any artifacts in the data capture.&nbsp; The sensor connection data are used to visualize the integrity of the sensors' attachment to the subject. PPG data are also captured for use in a future iteration of the project.</p>\r\n<p>The (2) <strong>EEG data conversion</strong> section accepts EEG bandwidth data representing specific event-related potentials, which are then translated into musical events.</p>\r\n<p><strong>This section is made up of three subsections</strong> that format their output data differently, depending on the use case:</p>\r\n<p>1. <strong>Internal sound generation and DSP</strong></p>\r\n<p>This is intended to be used entirely within the Max environment, where the captured data are converted into musical events and sonified using synthesis and effects built directly in Max.</p>\r\n<p>2. 
<strong>External MIDI</strong></p>\r\n<p>This is used with MIDI-equipped hardware or software,</p>\r\n<p>and</p>\r\n<p>3.<span><strong> External frequency and gate</strong>, for use with modular synthesizers.</span></p>\r\n<p>Each of these can be used separately or simultaneously, depending on the needs of the piece.</p>\r\n<p>First, the upper and lower bounds of the brainwave readings are noted for calibration purposes, and meaningful thresholds are defined for each brainwave frequency band.&nbsp; These thresholds are chosen based on the average and optimal EEG readings taken before the musical feedback is generated. When a threshold is reached or exceeded, an event is triggered.&nbsp; Depending on the mappings, these events can be one or more kinds of operations: sounding a note; a change of pitch, scale, or mode; note values and timings; and/or other generative performance characteristics, such as a change of timbre.</p>\r\n<p>For the data conversion, the event-related potentials are mapped as follows:</p>\r\n<p>Variations in <strong>alpha</strong>, relative to the predefined threshold, govern pitch.&nbsp;</p>\r\n<p>Changes in <strong>theta</strong>, relative to the threshold, influence note timing and rhythm, as well as note triggering/note density (relative to the <strong>beta</strong> values).</p>\r\n<p>
Changes in <strong>beta</strong>, relative to the threshold, influence scale and transposition.</p>\r\n<p>Variations in <strong>delta</strong>, relative to the threshold, influence spatial qualities such as reverberation and delay.</p>\r\n<p>Changes in <strong>gamma</strong> relative to the threshold influence timbre.</p>\r\n<p>Any of these mappings or thresholds can easily be changed to suit another thesis or another set of standards.</p>\r\n<p>The third section is&nbsp;(3)&nbsp;<strong>Sound generation and DSP</strong>. It is responsible for the sonification of the data translated from the&nbsp;<strong>EEG data conversion</strong>&nbsp;section. This section includes synthesis models, timbral characteristics, and spatial effects.</p>\r\n<p>The <strong>internal sound generation and DSP</strong> version of this project uses three synthesis voices created in Max for the generative musical feedback.&nbsp;There are two subtractive voices, each using a mix of sine, sawtooth, and triangle waves, and one FM voice.&nbsp;</p>\r\n<p>The timbral effects used are waveform mixing, frequency modulation, and high-pass, band-pass, and low-pass filters. 
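The threshold-and-mapping scheme above can be sketched in a few lines of Python. The band-to-parameter names, the event format, and the threshold values are illustrative assumptions for this sketch, not the project's actual Max patch logic:

```python
# Hypothetical band-to-parameter mappings following the article's description:
# alpha -> pitch, theta -> timing/density, beta -> scale/transposition,
# delta -> spatial effects, gamma -> timbre.
MAPPINGS = {
    "alpha": "pitch",
    "theta": "note_timing",
    "beta": "scale",
    "delta": "space",
    "gamma": "timbre",
}

def trigger_events(band_powers: dict, thresholds: dict) -> list:
    """Emit (parameter, deviation) events for every band at or above its
    calibrated threshold; the deviation sets the size of the change."""
    return [
        (MAPPINGS[band], power - thresholds[band])
        for band, power in band_powers.items()
        if power >= thresholds[band]
    ]
```

A single frame of band powers would then yield zero or more simultaneous events, which downstream logic could translate into notes, transpositions, or timbre changes, much as the conversion section routes them to the three output formats.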
The spatial effects used are reverberation and delay.&nbsp; Beyond the voices' initial settings, each of the timbral and spatial effects is modulated by separate event-related potential data captured from the EEG.</p>\r\n<p></p>\r\n<p><strong>Conclusions:</strong></p>\r\n<p>This project is a contemporary take on an idea that has interested me for many years, one that began with research into bidirectional EKG biofeedback.</p>\r\n<p>My first experience in this area came during university study in psychophysics at Rutgers University (funded by the University of Medicine and Dentistry of New Jersey). At UMDNJ, I had the privilege of working directly with some of the physicians at the forefront of psychophysiological research, whose work aimed to reduce stress in asthmatic subjects in order to lower the frequency of attacks [13].&nbsp;</p>\r\n<p>At the time, the technology needed to explore this idea was enormous in size and prohibitively expensive, except for officially funded medical or academic purposes. 
With today's availability of inexpensive electroencephalography (EEG) devices and heart-rate monitors, the possibility of exploring these concepts independently has become a reality.</p>\r\n<p>Although this project is chiefly concerned with changes in the alpha frequency range of the EEG, changes in the other frequency ranges are used to trigger events in the feedback. This approach was adopted to ensure that a subject's loss of concentration (and/or a drop in alpha power spectral density) does not negatively affect the generation of new musical feedback. With the help of coherent feedback, the subject should be able to refocus and continue. Depending on the subject's state of relaxation (and the PSD of the four other measured EEG frequency ranges), the performance and phrasing of the musical feedback change in ways that encourage greater concentration.</p>\r\n<p>For the first proof-of-concept trials, I tested a small sample of subjects. The preliminary data show that alpha readings were higher, on average, during the therapeutic phase. Likewise, the overall maximum value was higher during the therapeutic phase. 
This suggests that the feedback model is an effective means of increasing activity in the alpha brainwave frequency range, which is the beneficial physiological and psychological effect I had hoped to find, although far more data must be collected before definitive conclusions can be drawn.</p>\r\n<p></p>\r\n<p>The modular design of the work makes it possible to include or exclude almost any variable, which will be necessary to advance the research: to test the fundamental elements of the thesis in more depth, and to pursue whatever musicological exploration and analysis the definition of the feedback raises.</p>\r\n<p>In the meantime, alongside the research and data collection, I am using the software as a compositional system to create recorded works and live soundtracks. I also plan to mount the project as a live interactive installation.</p>\r\n<p></p>\r\n<p><strong>Contact details:</strong></p>\r\n<p>Johnny Tomasiello<br /><a href=\"https://johnnytomasiello.com/\">https://johnnytomasiello.com/</a></p>\r\n<p><a href=\"mailto:johnnytomasiello@gmail.com\">johnnytomasiello@gmail.com</a></p>\r\n<p>&nbsp;</p>\r\n<p><strong>Credits and acknowledgments:</strong></p>\r\n<p><strong> </strong></p>\r\n<p>IRCAM</p>\r\n<p>Cycling &rsquo;74</p>\r\n<p>Dr. Paul M. Lehrer and Dr. Richard Carr</p>\r\n<p>InteraXon Muse electroencephalography headband&nbsp;</p>\r\n<p>James Clutterbuck (Mind Monitor developer)</p>\r\n<p>Carol Parkinson, Executive Director of Harvestworks</p>\r\n<p>Melody Loveless, NYU &amp; Max certified trainer</p>\r\n<p></p>\r\n<p><strong>References:</strong></p>\r\n<p><strong></strong></p>\r\n<p>[1] J. 
J. Bird, A. Ekart, C. D. Buckingham, D. R. Faria. &ldquo;Mental Emotional Sentiment Classification with an EEG-based Brain-Machine Interface&rdquo;, International Conference on Digital Image &amp; Signal Processing (DISP&rsquo;19), Oxford, UK (2019)</p>\r\n<p><a href=\"http://jordanjamesbird.com/publications/Mental-Emotional-Sentiment-Classification-with-an-EEG-based-Brain-machine-Interface.pdf\">http://jordanjamesbird.com/publications/Mental-Emotional-Sentiment-Classification-with-an-EEG-based-Brain-machine-Interface.pdf</a>&nbsp;</p>\r\n<p>[2] K. Madden and G.K. Savard. &ldquo;Effects of Mental State on Heart Rate and Blood Pressure Variability in Men and Women&rdquo; in<span>&nbsp;</span><em>Clinical Physiology</em><span>&nbsp;</span>15, 557&ndash;569 (1995)</p>\r\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/8590551/\">https://pubmed.ncbi.nlm.nih.gov/8590551/</a>&nbsp;</p>\r\n<p>[3] F. Riganello et al. &ldquo;How Can Music Influence the Autonomic Nervous System Response in Patients with Severe Disorder of Consciousness?&rdquo; in&nbsp;<em>Frontiers in Neuroscience</em>&nbsp;vol. 9, 461 (2015)</p>\r\n<p><a href=\"https://www.researchgate.net/publication/285592605_How_Can_Music_Influence_the_Autonomic_Nervous_System_Response_in_Patients_with_Severe_Disorder_of_Consciousness\">https://www.researchgate.net/publication/285592605_How_Can_Music_Influence_the_Autonomic_Nervous_System_Response_in_Patients_with_Severe_Disorder_of_Consciousness</a>&nbsp;</p>\r\n<p>[4] H. Marzbani et al. &ldquo;Methodological Note: Neurofeedback: A Comprehensive Review on System Design, Methodology and Clinical Applications&rdquo; in<span>&nbsp;</span><em>Basic and Clinical Neuroscience Journal</em><span>&nbsp;</span>vol. 7, 143&ndash;158 (2016)</p>\r\n<p><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4892319/\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4892319/</a>&nbsp;</p>\r\n<p>[5] P.M. Lehrer<em><span>&nbsp;</span></em>and R. 
Carr &ldquo;Stress Management Techniques: Are They All Equivalent, or Do They Have Specific Effects?&rdquo; in<span>&nbsp;</span><em>Biofeedback and Self-Regulation</em><span>&nbsp;</span>(1994)</p>\r\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/7880911/\">https://pubmed.ncbi.nlm.nih.gov/7880911/</a>&nbsp;</p>\r\n<p>[6] J. Ehrhart, M. Toussaint, C. Simon, C. Gronfier, R. Luthringer, G. Brandenberger. &ldquo;Alpha Activity and Cardiac Correlates: Three Types of Relationships During Nocturnal Sleep&rdquo; in<span>&nbsp;</span><em>Clinical Neurophysiology</em><span>&nbsp;</span>vol. 111, 940&ndash;946 (2000)</p>\r\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/10802467/\">https://pubmed.ncbi.nlm.nih.gov/10802467/</a>&nbsp;</p>\r\n<p>[7] Amy L. Proskovec, Alex I. Wiesman, and Tony W. Wilson. &ldquo;The Strength of Alpha and Gamma Oscillations Predicts Behavioral Switch Costs&rdquo; in<span>&nbsp;</span><em>NeuroImage</em><span>&nbsp;</span>vol. 188, 274&ndash;281 (2019)</p>\r\n<p><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6401274/\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6401274/</a></p>\r\n<p>[8] J. Cage, R. Kostelanetz.<span>&nbsp;</span><em>John Cage Writer: Previously Uncollected Pieces</em>.&nbsp;</p>\r\n<p>New York: Limelight (1993)</p>\r\n<p>[9] B. Lutters, P. J. Koehler. 
&ldquo;Brainwaves in Concert: the 20th Century Sonification of the Electroencephalogram&rdquo; in<span>&nbsp;</span><em>Brain</em><span>&nbsp;</span>139 (Pt 10), 2809&ndash;2814 (2016)</p>\r\n<p><a href=\"https://academic.oup.com/brain/article/139/10/2809/2196694\">https://academic.oup.com/brain/article/139/10/2809/2196694#</a>&nbsp;</p>\r\n<p>[10] E.D. Adrian and B.H.C. Matthews, &ldquo;The Berger Rhythm: Potential Changes From The Occipital Lobes in Man&rdquo; in<span>&nbsp;</span><em>Brain</em><span>&nbsp;</span>vol. 57, Issue 4 (December 1934)</p>\r\n<p><a href=\"https://academic.oup.com/brain/article/133/1/3/314887\">https://academic.oup.com/brain/article/133/1/3/314887</a>&nbsp;</p>\r\n<p>[11] M. Atkinson, MD, &ldquo;How To Interpret an EEG and its Report&rdquo; (2010)</p>\r\n<p><a href=\"https://neurology.med.wayne.edu/pdfs/how_to_interpret_and_eeg_and_its_report.pdf\">https://neurology.med.wayne.edu/pdfs/how_to_interpret_and_eeg_and_its_report.pdf</a>&nbsp;</p>\r\n<p>[12] E.R. Miranda. &ldquo;Brain&ndash;Computer Music Interfacing: Interdisciplinary Research at the Crossroads of Music, Science and Biomedical Engineering&rdquo; in E.R. Miranda, J. Castet, ed.<span>&nbsp;</span><em>Guide to Brain-Computer Music Interfacing</em>. London: Springer-Verlag, 1&ndash;27 (2014)</p>\r\n<p><a href=\"https://www.researchgate.net/publication/312797756_Brain-Computer_Music_Interfacing_Interdisciplinary_Research_at_the_Crossroads_of_Music_Science_and_Biomedical_Engineering\">https://www.researchgate.net/publication/312797756_Brain-Computer_Music_Interfacing_Interdisciplinary_Research_at_the_Crossroads_of_Music_Science_and_Biomedical_Engineering</a>&nbsp;</p>\r\n<p>[13] P.M. 
Lehrer et al.&nbsp;&ldquo;Relaxation and Music Therapies for Asthma among Patients Prestabilized on Asthma Medication&rdquo;&nbsp;in<span>&nbsp;</span><em>Journal of Behavioral Medicine</em>&nbsp;17,&nbsp;1&ndash;24 (1994)</p>\r\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/8201609/\">https://pubmed.ncbi.nlm.nih.gov/8201609/</a>&nbsp;</p>\r\n<p>[14] P. M. Lehrer, R. Gevirtz. &ldquo;Heart Rate Variability Biofeedback: How and Why Does It Work?&rdquo; in<span>&nbsp;</span><em>Frontiers in Psychology</em><span>&nbsp;</span>vol. 5, 756 (2014)</p>\r\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/25101026/\">https://pubmed.ncbi.nlm.nih.gov/25101026/</a></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 1777,
                "name": "Alphawave",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1778,
                "name": "Alphawave training",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 562,
                "name": "Bcmi",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 559,
                "name": "Brainwaves",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 277,
                "name": "Max 8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1779,
                "name": "Synthesizer",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20945,
            "forum_user": {
                "id": 20934,
                "user": 20945,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Tomasiello-modular_01b.png",
                "avatar_url": "/media/cache/8e/26/8e262109aba7469cf1a5c6158552e9f8.jpg",
                "biography": "Johnny Tomasiello is a multidisciplinary artist and composer-researcher, with a deep interest in expanded conceptualizations of sound, visuals, and time. His work employs methodologies across media, and is informed by research into neuroscience, psychophysics and biofeedback.  \n\nFocused on the relationship between perception and the mechanics of physiology, his immersive works, compositions, and performances reveal otherwise invisible processes in physiological and technological systems. Drawing on custom-built instruments and software, his work references mechanisms of expression and experience through data sonification, biofeedback, and reciprocal physiological systems.\n\nAs a performer, Tomasiello has produced live immersive performances and lectures featuring his interactive computer-assisted compositional performance systems and Brain-Computer Interfaces (BCI) that create, manipulate, and deconstruct audio and visuals, as well as physiological responses. He has lectured on the subject, staged live performances, scored films, and shown canvases and sound works in galleries and at institutions in the US and abroad.",
                "date_modified": "2026-02-12T19:09:20.143419+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "johnnytomasiello",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "deriving-synchrony-a-real-time-interactive-brainwave-to-music-translation-performance-system",
        "pk": 2731,
        "published": true,
        "publish_date": "2024-02-14T17:45:29+01:00"
    },
    {
        "title": "Towards personal spatial sound systems",
        "description": "Whether you’re an artist searching for new ways to work with space and sound, or you run an art center, research lab, residency or a festival looking to expand what you can offer, the landscape of spatial sound is changing. This article is about those changes, the challenges many of us face, and a new approach leaving our Spatial Sound R&D Lab and entering the world — codenamed the \"Singing Watermelons\" — inviting the curious minds and unstoppable artists to help shape what comes next.",
        "content": "<h2><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/595b142db75a2b4233f1227768f86707.jpg\" /></h2>\r\n<h2>Spatial sound reality &mdash; access is still an exception</h2>\r\n<p>No sugarcoating: for most artists, access to spatial sound is quite limited. You don't own an immersive sound studio or a beamformer. Instead, you apply for a grant or a residency, and if you&rsquo;re lucky, you get a few days with a system&nbsp; &mdash;&nbsp; just enough to get lost in the manual and maybe try out an idea or two. Every instrument is a new learning curve, and by the time you start to get comfortable, your slot is up and you&rsquo;re back to headphones and stereo mixes, or at best vague approximations.</p>\r\n<p>&nbsp;</p>\r\n<p>Institutions face a different but related challenge. You might have a dome or a multi-channel array, but these are often tied to a single space and require significant investment. This limits how many artists you can host, and what kinds of projects are possible. What if you could add a spatial sound instrument that doesn&rsquo;t need its dedicated, permanent, acoustically treated studio, that&rsquo;s easy to move, and that lets you take projects out into the world &mdash; into galleries, public spaces, or wherever people actually are? That&rsquo;s the gap we&rsquo;re hoping to fill.</p>\r\n<p>&nbsp;</p>\r\n<p>Meanwhile, the traditional paths for composers and musicians have only gotten tougher. Each year, thousands of students graduate from music programs in Germany &mdash; industry estimates often cite figures upwards of 8,000 &mdash; while the number of full-time orchestral vacancies remains in the low hundreds or fewer. 
This gap is widely discussed in music education circles and reported by professional associations, though precise annual figures are not always published.</p>\r\n<p>&nbsp;</p>\r\n<p>The entertainment and pop music world is no different &mdash; everyone's eyes are on stars like Taylor Swift, or self-made revelations like Jacob Collier, promising a vibrant career within reach. But for 99% of artists the reality is harsh &mdash; with over 60,000 new tracks uploaded to Spotify every day, most artists see mere pennies per play and struggle to launch their careers. If the old &ldquo;stereo&rdquo; world is dying, why is it still so crowded? Inertia is one thing; lack of access to tools that could set you apart &mdash; like spatial sound &mdash; used to be the other.</p>\r\n<p>But this is changing, and interesting new careers in sound are opening for those willing to take a step away from the beaten path.</p>\r\n<p>&nbsp;</p>\r\n<h2>Why I got obsessed (and why you might, too)</h2>\r\n<p>&nbsp;</p>\r\n<p>My journey into spatial sound began with curiosity and a sense of possibility. For two decades, I worked at the intersection of technology, art, and space &mdash; mostly on the visual side. But after reaching many of my goals there, and anticipating the tectonic shifts sweeping through the industry, I made a conscious, deliberate decision to leave the now-colonized world of visuals and venture into the less-explored territory of spatial sound.</p>\r\n<p>In filmmaking, we often say that \"Sound is 50% of the experience\" &mdash; and I used to take it at face value. But perhaps even that is an understatement?</p>\r\n<p>Today, I am convinced it's more. Every time I encountered a truly immersive experience &mdash; like the 4D Sound system at Monom in Berlin, or Gerriet Sharma&rsquo;s work with beamforming arrays at Spaes &mdash; I left wanting more. 
Not just as a listener, but as a maker.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/31ef984baae5d964d62fb6b74df458c4.jpg\" /></p>\r\n<p>What struck me most wasn&rsquo;t just the technology, but how these systems could make you feel like you were listening to space and sound itself, not just to speakers.</p>\r\n<p>Yet, these experiences used to be rare, locked away in specialized studios and research labs, accessible only to the luckiest or most determined few. I wanted to change that. (If you&rsquo;re curious about the deeper story and the ideas that shaped this project, I wrote more in <a href=\"https://slovox.design/a-the-futures-of-sound\">The Futures of Sound</a>.)</p>\r\n<h2>The Lighthouse of Sound &mdash; ambisonic arrays turned inside out</h2>\r\n<p>Let&rsquo;s talk about the instrument itself, and why it&rsquo;s different. Traditional ambisonic domes or surround sound systems work by surrounding the listener with speakers, projecting sound inward. It&rsquo;s a bit like sitting in the center of a planetarium &mdash; if you&rsquo;re in the &ldquo;sweet spot,&rdquo; you get a somewhat convincing sense of spatiality. But step outside that zone, and it&rsquo;s just a bunch of speakers.</p>\r\n<p>Spherical beamforming arrays &mdash; what we&rsquo;re building &mdash; flip this idea inside out. Imagine a lighthouse, but instead of casting light, it projects beams of sound outwards into the room. These beams bounce off walls, ceilings, and bodies, filling the space in unpredictable, lively ways. You don&rsquo;t need a perfectly treated studio. In fact, reverberant spaces &mdash; churches, small halls, even industrial sites &mdash; become collaborators rather than obstacles.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/58c67277542a8c16636d58c0fc3ebd01.jpg\" /></p>\r\n<p>This isn&rsquo;t about isolating the listener in a pristine bubble of idealized &ldquo;reference sound&rdquo;. 
Rather, it&rsquo;s about social listening: bringing people together, letting them move around, and inviting the architecture itself to shape the experience. The spherical, &ldquo;totemic&rdquo; design isn&rsquo;t just practical &mdash; it&rsquo;s symbolic. It organizes space, draws people in, and offers a focal point for shared exploration.</p>\r\n<h2>\"Spherical beamforming what&hellip;?\"</h2>\r\n<p>Let&rsquo;s break it down. For me, this project started as the \"Quest for the Holy Grail\" &mdash; understanding, and building myself a holosonic projector, something that could precisely project beams of sound across the full frequency range and allow us to explore sound as space and in relation with architecture. That&rsquo;s beamforming &mdash; the pinnacle of spatial sound control, once envisioned at IRCAM, later pioneered by <a href=\"https://iem.kug.ac.at/\">IEM</a> over many years of research.</p>\r\n<p>But as the project progressed, and as I spoke with spatial sound practitioners around the world, I realized people wanted to use &ldquo;Singing Watermelons&rdquo; in all kinds of scenarios &mdash; and the instrument was already capable of much more than I first imagined.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5f68620f858a13f67a2384cb3775cf5b.jpg\" /></p>\r\n<p>So what is it? It&rsquo;s a bunch of speakers in a compact enclosure, if you like. You can feed them a single signal, if you want. But that&rsquo;s boring.</p>\r\n<p>It&rsquo;s a bunch of speakers &mdash; and each one can be individually controlled. Use it as a multi-channel instrument. Take advantage of intuitive workflows you already know in your DAW, with Max, or Supercollider. At this stage, it&rsquo;s already opening up a vast range of possibilities in electroacoustic, acousmatic, or just electronic composition. Use it in live acts, installations, or theatre. 
I keep getting surprised.</p>\r\n<p>But we can do much more &mdash; this &ldquo;bunch of speakers&rdquo; can be treated as a precise sound radiator, used to decode and diffuse higher-order ambisonic signals, for those who work in or believe in that format.</p>\r\n<p>And as you move towards more sophisticated usage, this &ldquo;bunch of speakers&rdquo;, combined with beamforming workflows, becomes an ultra-precise holosonic projector &mdash; a lighthouse of sound, animating evolving sonic sculptures that are interwoven with the architecture and surroundings.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8f672e5cf72d41ac44e5c9fae4e10097.jpg\" /></p>\r\n<p>Here, the room &mdash; with all its acoustic imperfections &mdash; is not an enemy anymore. Instead, it becomes a collaborator, inviting a much more organic and creative approach than what years of Hi-Fi marketing have conditioned us to chase: the myth of \"pristine, reference sound\". Isn&rsquo;t that, in itself, an elegant answer to the endless debate between the prohibiting &ldquo;sound engineer&rsquo;s ears&rdquo; and creativity-seeking &ldquo;artist&rsquo;s ears&rdquo;?</p>\r\n<p>Lastly, beamforming (and higher-order ambisonics), when paired with a central spherical loudspeaker, remain the Wild West of sound exploration &mdash; only a few dozen compositions have ever been created for systems like this. And that&rsquo;s exactly what makes it so exciting.</p>\r\n<p>&nbsp;</p>\r\n<h2>For Institutions: more artists, more places</h2>\r\n<p>For institutions, adding a spherical spatial sound instrument to your toolkit doesn&rsquo;t mean tearing down walls or building a new studio. It&rsquo;s a way to complement what you already have &mdash; hosting more residencies, inviting more artists, and even taking projects out of the building and into the world. 
Imagine a residency where the instrument goes with the artist, or a festival where spatial sound compositions pop up in unexpected places.</p>\r\n<p>&nbsp;</p>\r\n<h2>The Singing Watermelons (and Their Friends)</h2>\r\n<p>At Slovox, the cornerstone project is the Singing Watermelons: a family of spherical multi-channel loudspeakers, each designed for versatility, affordability and portability. They fit in a couple of rack cases, and can be set up by a small team in minutes. Each instrument can project sound in multiple ways &mdash; omnidirectional, multi-channel, ambisonic, or as a beamformer &mdash; letting you experiment with how sound interacts with space.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/406fa64cff995f0f75d02dd34b0d8e4e.jpg\" /></p>\r\n<p>One of the key challenges was coming up with ways of controlling these speakers. Controlling each channel separately requires a full chain: from an interface with a sufficient number of channels, through digital-to-analog conversion (sometimes in the interface), all the way to amplification, all while staying on budget. While high-end solutions that could satisfy all that already exist on the market, we wanted to achieve it at a fraction of the price, using off-the-shelf components. That's the essence of our Juiceboxes &mdash; a custom amp rack with everything you need to get these things working, without breaking the bank.</p>\r\n<p>&nbsp;</p>\r\n<h2>Why Now? Welcoming Early Abandoners</h2>\r\n<p>After a sprint-marathon of R&amp;D &mdash; learning from the research at IEM in Graz (and getting their generous, friendly support!), building prototypes, and testing &mdash; we&rsquo;re getting ready to open things up. 
As of this summer, we&rsquo;re inviting early abandoners (of the old world) and early adopters: artists, institutions, and anyone curious about what&rsquo;s possible when spatial sound becomes personal and portable.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a0bed4a8e45d00a51cfae6e65687e042.jpg\" /></p>\r\n<p>If you&rsquo;re tired of waiting for access, or want to offer new kinds of residencies, workshops, or public projects, let&rsquo;s talk. We&rsquo;re not trying to replace domes or compete with the big labs; we&rsquo;re building on their work, and hoping to make spatial sound a tool for more people, in more places.</p>\r\n<h2>GoSpatial 2025 &mdash; Open Call for Works &amp; Collaborations</h2>\r\n<p>We invite sound artists, composers, and sonic experimenters to set new boundaries using our new spherical spatial sound instrument, the Svarog 24 &mdash; part of the &ldquo;Singing Watermelons&rdquo; family.&nbsp;Bring your sound into a new sphere, and win a 1-week collaborative residency in our wilderness studio in south-west Poland, or one of 3 workshop weekends to develop your piece for our new instrument. Selected works join our new spherical speaker library &mdash; possible royalties &amp; showcases.</p>\r\n<p>Deadline: 2025/08/03 (Priority) / 2025/08/08 (late)</p>\r\n<p><a href=\"https://slovox.design/go-spatial-2025\">Submit your proposal</a></p>\r\n<p>&nbsp;</p>\r\n<h2>Who</h2>\r\n<p><a href=\"https://kizny.com\">Patrick Kizny</a> (⇀ <a href=\"https://instagram.com/spatial.dude\">Instagram</a>) is a creative entrepreneur, artist, and recently &mdash; the founder of Slovox, known for his work at the crossroads of art, technology, and design. 
With over two decades of experience leading innovative projects in film, visual effects, and creative technology, he now focuses on pioneering new musical ventures and spatial sound instruments that challenge conventions and invite new ways of listening.</p>\r\n<h2>Slovox</h2>\r\n<p><a href=\"https://slovox.design\" title=\"Slovox - Spatial Sound R&amp;D Studio\">Slovox</a> is an R&amp;D lab dedicated to designing and building spatial sound instruments, systems, and installations. By making advanced spatial audio technology accessible to artists, researchers, and institutions, Slovox empowers creative exploration and collaboration &mdash; enabling new forms of social listening and sonic experience.</p>\r\n<h2>The Last Ear</h2>\r\n<p><a href=\"https://thelastear.org\">The Last Ear</a> is a platform committed to restoring the art of listening in an age of engineered distraction. Home to Slovox and the Echotronica festival, it unites art, technology, and radical attention to foster connection, creative resistance, and a community dedicated to advancing the art and science of listening.</p>",
        "topics": [
            {
                "id": 623,
                "name": "Ambisonics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3139,
                "name": "beamforming",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3141,
                "name": "industry",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3138,
                "name": "spatial sound",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3140,
                "name": "spherical arrays",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 88588,
            "forum_user": {
                "id": 88481,
                "user": 88588,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Paco_Watermelon.JPG",
                "avatar_url": "/media/cache/5f/fc/5ffcaa0bd30b52186ce7e95c24b542d0.jpg",
                "biography": "Creative entrepreneur. Now building “Singing Watermelons”\nFounder of Slovox - R&D Lab for Spatial Sound.",
                "date_modified": "2025-07-22T11:46:24.244801+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "pacocreative",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "towards-personal-spatial-sound-systems",
        "pk": 3555,
        "published": true,
        "publish_date": "2025-07-16T17:49:58+02:00"
    },
    {
        "title": "SONIFYING CHEMICAL EVOLUTION - Compositional Strategy in FIRST LIFE by Steve Everett",
        "description": "This presentation discusses the creation of FIRST LIFE, a 75-minute mixed media performance for string quartet, live audio and motion capture video, and audience participation utilizing stochastic models of biochemical data provided by the Grover Research Group at the Georgia Institute of Technology, USA. Each section of this work is constructed from contingent outcomes drawn from research exploring possible early Earth formations of organic compounds",
        "content": "<div>This presentation discusses the creation of FIRST LIFE, a 75-minute mixed media performance for string quartet, live audio and motion capture video, and audience participation utilizing stochastic models of biochemical data provided by the Grover Research Group at the Georgia Institute of Technology, USA. Each section of this work is constructed from contingent outcomes drawn from research exploring possible early Earth formations of organic compounds.</div>\r\n<div>&nbsp;</div>\r\n<div>This project created auditory models of the possible elemental and environmental conditions present in early Earth thus providing a new way to imagine the salient biochemical morphologies at play in the origins of evolution. The goal was to create both an artistically sensitive realization of the scientific data and to provide an educational opportunity for audience participants to engage with the fundamental principles of this research project into the origins of life.&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<div>Data values drawn from self-organizing chemical compounds were assigned to the sonic properties of frequency, amplitude, duration, timbre, tempo, string instrument physical properties, and spatial location. The stochastic processes also contain Hidden Markov Models that embed a degree of probabilistic input from the computer-generated processing, the string quartet performers, and audience.&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<div>The performance attempts to model a biological organism&rsquo;s ability to respond to the conditions of its environment and to learn from its own history. 
Data representation types in this composition include discrete, continuous, stochastic, and interactive forms.</div>\r\n<div>&nbsp;</div>\r\n<div>In attempting to develop a sonic platform that could contain structural dimensions of the possible chemical properties of early life, rather than begin with a traditional process of data mapping, I chose to adopt the seven essential principles of life as outlined by UC-Berkeley biochemist Daniel E. Koshland for the construction of the work.</div>\r\n<div>1. Program</div>\r\n<div>2. Improvisation &ndash; describes the possibility that a system can change its program in order to adapt to new environmental conditions</div>\r\n<div>3. Compartmentalization</div>\r\n<div>4. Energy</div>\r\n<div>5. Regeneration &ndash; takes into account thermodynamic losses</div>\r\n<div>6. Adaptability</div>\r\n<div>7. Seclusion &ndash; &ldquo;privacy&rdquo; in the social world. This property of life makes it possible for biochemical processes to take place independently in cells without disturbing one another.&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<div>Audio-visual programs used in the composition and live performance of the work are MAX, Kyma, Isadora, AudioSculpt, Spat, OpenMusic, and Modalys. 
&nbsp;The live motion capture video system uses two Microsoft Kinect sensors.</div>\r\n<div></div>\r\n<div>For this talk Steve Everett uses:</div>\r\n<div></div>\r\n<div>\r\n<div><span lang=\"EN-US\"><a href=\"https://forum.ircam.fr/collections/detail/openmusic-world/\">OpenMusic</a>, <a href=\"https://forum.ircam.fr/projects/detail/panoramix/\">Spat,</a> Modalys, IRCAM Max objects, Orchidea, Antescofo</span></div>\r\n<div><span></span></div>\r\n<div></div>\r\n</div>\r\n<div>\r\n<h3><strong>This talk will be presented during the Ircam Forum Workshop in Seoul at the Seoul National University&nbsp;</strong></h3>\r\n<h3><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-seoul-6-8-november-2024/\">More info on the event</a></h3>\r\n</div>\r\n<div><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/61ac0b57481d3b9575af6c369746fd3d.png\" /></div>\r\n<p>&nbsp;</p>\r\n<div></div>",
        "topics": [
            {
                "id": 253,
                "name": "Composition Assistée par Ordinateur",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 308,
                "name": "Live electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 703,
                "name": "Sonification",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 337,
            "forum_user": {
                "id": 337,
                "user": 337,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/SE-Seoul-a.jpg",
                "avatar_url": "/media/cache/fc/cb/fccb9dfc06ebcc4a6967dc6b114aba6e.jpg",
                "biography": "Steve Everett is professor of music at the City University of New York (CUNY) Graduate Center where he also served as Provost. He has also been a professor at Emory University, visiting professor at Princeton University, and a guest composer at the Conservatoire National Supérieur de Musique de Paris, the Conservatoire de Musique de Genève in Switzerland, Rotterdam Conservatoire, HKU Utrechts Conservatorium, Tokyo Denki University, and Eastman School of Music.\n\nHis compositions involve performers with live electronics and have been performed throughout Europe and Asia, including at IRCAM and INA-GRM Radio France, Re-New Arts Festival (Copenhagen), Orgelpark (Amsterdam), the Esplanade (Singapore), Korea Computer Music Festival, Manchester (England), Cologne (Germany), and Resonances Arts Festival (Paris). He has received composition awards from the Asian Cultural Council, Chamber Music America, and has been a senior research fellow at the Rockefeller Study Center in Bellagio, Italy.\n\nHis doctorate in composition is from the University of Illinois. He also studied composition with Sir Peter Maxwell Davies and Witold Lutosławski in England and conducting with Pierre Boulez in NYC.",
                "date_modified": "2026-03-03T22:22:43.252807+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 46,
                        "forum_user": 337,
                        "date_start": "2012-11-22",
                        "date_end": "2026-11-03",
                        "type": 0,
                        "keys": [
                            {
                                "id": 526,
                                "membership": 46
                            },
                            {
                                "id": 527,
                                "membership": 46
                            },
                            {
                                "id": 567,
                                "membership": 46
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "SteveEVERETT",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sonifying-chemical-evolution-compositional-strategy-in-first-life-steve-everett",
        "pk": 3013,
        "published": true,
        "publish_date": "2024-10-09T10:24:22+02:00"
    },
    {
        "title": "Audiovisual XR scenographies of transactions and speculative AI-mutations by Jānis Garančs",
        "description": "audiovisual installation with stereoscopic 3D projection, showcasing two investigations of analytic and affective aspects of immersion: 1) staging of real financial transaction data 2) fictional narrative in a latent space of AI-aided spatial scenery",
        "content": "<div>\r\n<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\nThis audiovisual installation combines stereoscopic 3D projection and multi-channel sound to showcase two projects that explore spatiality as an analytic and affective storytelling vehicle: one departing from data-driven analysis and the other through an evocative speculative narrative.</div>\r\n<div><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d1b58501acd355ccc1e725d9fb1bd344.jpg\" /></div>\r\n<div>&nbsp;</div>\r\n<div>The first project \"Rhapsodic Statistics\" stages archived and real-time financial transaction data&mdash;sourced primarily from cryptocurrency exchanges&mdash;as an immersive audiovisual experience. By translating multiple datasets into a dynamic XR scenography, this installation invites participants to experience the narrative flow transitioning from an analytic overview and comparison to an affective endo-perspective where data becomes viscerally embodied. Thematically, the artwork critically reflects on the gambling factor in global economic processes and the trends of &lsquo;gamification&rsquo; in trading experiences for the masses.</div>\r\n<div><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/83c7aaf04b1b6e657af1ebdff21b4752.jpg\" /></div>\r\n<div>&nbsp;</div>\r\n<div>The second project, based on the multi-channel soundpiece \"Latent Dictatorship Spirals\", with additional 3D visuals, transports viewers into a fictional narrative, tracing the life-cycle of a virus-like AI entity as it takes root in a virtual habitat. This speculative journey oscillates between chaos and purpose, mirroring an evolutionary arc&mdash;from primal disorder to the seductive, rigid rule-sets of an artificial existence, and finally to self-imposed destruction. 
Enhanced by generative sound layers featuring AI-created chords, voices and sonic trajectories of sampled screeches from military attack drones in Ukraine war videos, the installation tries to weave a tapestry of euphoria, awe, delusion, and trepidation. While immersed&nbsp; in the entity&rsquo;s haunting transformations, spectators can speculatively associate it with the cyclic or spiralling societal developments and technological transitions of the last 100 years.</div>\r\n<div></div>\r\n<div><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></div>",
        "topics": [
            {
                "id": 128,
                "name": "Audiovisual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3232,
                "name": "immersive analytics and affect",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3231,
                "name": "stereoscopic 3D",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 126732,
            "forum_user": {
                "id": 126565,
                "user": 126732,
                "first_name": "Jānis",
                "last_name": "Garančs",
                "avatar": "https://forum.ircam.fr/media/avatars/Garancs.jpg",
                "avatar_url": "/media/cache/c2/5a/c25aa43d137609ed0737523f55889dd4.jpg",
                "biography": "With a foundation in classical fine arts and music in Riga, Latvia, Jānis Garančs went on to specialise in video and computer art at the Royal Institute of Art (KKH) in Stockholm, Sweden, and digital audiovisual media at the Academy of Media Arts (KHM) in Cologne, Germany.\n\nSince 2000, his creative practice has focused on interactive multimedia installations, virtual and extended reality (VR/XR), and audiovisual performances. His work has been showcased at international festivals and conferences, including Ars Electronica, ISEA, Transmediale, and RIXC Art and Science Festival. He has received several artist-in-residence grants, such as those from SAT (Montreal), V2_Lab (Rotterdam), and EFFEA (European Festivals Fund for Emerging Artists).\n\nGarančs is a co-founder and board member of RIXC — the Riga Center for New Media Culture. Currently he is a PhD candidate at RTU Liepāja, part of Riga Technical University in Latvia, and a visiting researcher at Aalto Studios / Aalto University in Finland.",
                "date_modified": "2026-03-18T03:46:02.980287+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1215,
                        "forum_user": 126565,
                        "date_start": "2025-10-07",
                        "date_end": "2026-10-07",
                        "type": 0,
                        "keys": [
                            {
                                "id": 1085,
                                "membership": 1215
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "garancs",
            "first_name": "Jānis",
            "last_name": "Garančs",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3593,
                    "user": 126732,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "audiovisual-xr-scenographies-of-transactions-and-speculative-ai-mutations-by-janis-garancs",
        "pk": 3593,
        "published": true,
        "publish_date": "2025-07-31T19:07:05+02:00"
    },
    {
        "title": "Immersive audio in Latin America does not have a talent problem. It has a disconnect from our technical reality.",
        "description": "The article presents data-driven research (based on a survey of 32 active professionals in LATAM) that challenges the common narrative that immersive audio fails due to a lack of creativity or access to cutting-edge technology. The author argues that the real obstacle is a structural disconnect: current workflows and validation systems were designed for realities far removed from Latin American infrastructure, forcing engineers to operate in a state of \"constant technical negotiation\" and \"digital survival.\"",
        "content": "<div>&lt;header class=\"lqd-post-header entry-header\"&gt;\n<h1><strong><span style=\"\"><em>Author<span><a href=\"https://solrezza.net/en/author/sol/\" title=\"Sol Rezza\">Sol Rezza &nbsp;</a></span>Published on:<a href=\"https://solrezza.net/en/immersive-audio-in-latam-doesnt-have-a-talent-problem-it-has-a-systemic-design-problem/\" style=\"\">&lt;time class=\"entry-date published\" datetime=\"2026-01-07T12:46:30-03:00\"&gt;07/01/2026 &lt;/time&gt;</a><br></em></span></strong></h1>\n&lt;/header&gt;</div>\n&lt;article id=\"post-18539\" class=\"lqd-post-content pos-rel post-18539 post type-post status-publish format-standard has-post-thumbnail hentry category-latam-immersive-audio-field-notes tag-adoption-strategy tag-audio-innovation tag-audio-validation tag-creative-projects tag-field-notes tag-immersive-audio tag-immersive-project-management tag-latam-latin-america tag-professional-challenges tag-real-world-testing tag-sound-industry tag-technology-adoption tag-workflow-design\"&gt;\n<div>\n<div>\n<div>\n<div>\n<div>\n<div>\n<h4 style=\"text-align: center;\"><em>Immersive audio in Latin America doesn&rsquo;t fail due to a lack of talent or technology.</em><br><em><span style=\"\">It fails because validation processes, monitoring, and workflows were never designed for our context.<br><br><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/fed78ee974915a09e60b5b5910915f6d.jpg\"><br></span></em></h4>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<div>\n<p>I recently launched an ongoing, targeted survey focused on active immersive audio practitioners across LATAM to map how technical decisions are validated under real world conditions.</p>\n<p>This article presents an initial diagnosis based on 32 specialized respondents. 
While the data collection remains open, the high recurrence of specific workflows, infrastructure constraints, and technical frictions already reveals robust and consistent patterns that define the regional landscape.</p>\n<p>The data confirms that working with immersive audio in Latin America is not a creative problem, but a constant technical negotiation.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/92bc43016b3ee80fcdba1c32d19fef82.jpg\"></p>\n</div>\n</div>\n<div>\n<div>&nbsp;</div>\n</div>\n<div>\n<div>\n<ul>\n<li><strong>50% of professionals access a physical immersive monitoring room less than once a year.<br></strong>This disconnection from the real acoustic space creates a fragile validation scenario, where the absence of direct and verifiable references turns binaural monitoring into the only available option.<br><br></li>\n<li>\n<p><strong>Immersive audio infrastructure in LATAM is a house of cards.<br></strong>Its fragility is not an isolated phenomenon, but a structural condition that runs through the entire workflow and shapes every technical decision from the very beginning of the process.</p>\n</li>\n<li><strong>Working with generic HRTF models is equivalent to processing sound through someone else&rsquo;s morphology.<br></strong>The lack of individualized profiles and the lack of&nbsp;<em>head tracking</em>&nbsp;systems make spatial perception imprecise and generate cognitive fatigue that compromises technical judgment.</li>\n</ul>\n</div>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<p>This analysis does not come from theory, but from direct observation of the infrastructure in Latin America. It is not an opinion, but a diagnosis of how the system actually operates.</p>\n<p>When validation frameworks are fragile, technical decisions become indefensible, and what cannot be defended cannot scale. 
There is a widespread confusion between experimentation and real technical validation.</p>\n<p>The professional&rsquo;s insecurity does not stem from a lack of knowledge, but is the logical response to tools that were never designed for their reality.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<div>\n<div>&nbsp;</div>\n</div>\n<div>\n<div>\n<blockquote>\n<div style=\"text-align: center;\">\"I created a piece for a 93.5 channel dome, and the most difficult part when testing it in the space was finding front and back. Even though everything was positioned in the software, the sense of distance sounded extremely reduced and the orientation was confusing.\"</div>\n</blockquote>\n</div>\n</div>\n<div>\n<div>&nbsp;</div>\n</div>\n</div>\n<div>\n<div>\n<h3><span style=\"text-decoration: underline;\">The Validation Limbo</span></h3>\n</div>\n<div>\n<p>In Latin America, most artists and professionals work under a structural limitation that reduces access to the final exhibition space to just a few minimal instances before the premiere, as they operate primarily from their own studios on projects intended for domes or immersive theaters abroad.</p>\n<p>This physical disconnection forces reliance on generic binaural rendering engines, which, lacking calibrated references, lose resolution and technical precision.</p>\n<p>When there is no real acoustic environment, perception is replaced by a form of sensory substitution. 
Instead of trusting what we hear, we begin to trust what we see on the screen, relying on the visual information and data provided by the rendering engine.</p>\n<p>In this scenario of acoustic myopia, mixing ceases to be an aesthetic decision and becomes an exercise in data management, prioritizing visual coherence over sonic intent.</p>\n<p>This dynamic undermines trust in one&rsquo;s own listening and makes professionals excessively dependent on what the software claims is happening.<br><br>To regain stability in the process, it is essential to establish constant reference points and rely on measurement tools that act as anchors to reality.</p>\n<p>However, the most valuable resource is strengthening the technical dialogue between those who design the content and those who operate the physical space, because only through fluid communication about routing and the room&rsquo;s real acoustic response is it possible to transform software uncertainty into solid professional decisions.</p>\n<p>For software development and space management industries, this scenario represents a critical design opportunity.</p>\n<p>The challenge is not only to improve rendering but also to create bridges that translate the complexity of the physical space to the workstation.</p>\n<p>Integrating specific acoustic profiles for each room and automated communication protocols would allow companies to provide the technical stability that professionals are currently forced to improvise on their own to prevent the collapse of the spatial image.<br><br></p>\n</div>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<div>\n<h3><span style=\"text-decoration: underline;\">The Fragility of the Ecosystem</span></h3>\n</div>\n<div>\n<p>This pattern represents a statistical reality that reveals a marked segmentation within the community.</p>\n<p>While one sector manages to sustain a stable workflow,&nbsp;<strong>the data shows a deep technical gap in which 42.8% of professionals describe their routing 
configuration as a persistent struggle, ranging from moderately to extremely difficult.</strong></p>\n<p>In this context of instability,&nbsp;<strong>64.3% of respondents choose Reaper</strong>&nbsp;not simply as a creative preference, but as a fundamental strategy for technical survival.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/09488c2b2712c4a9dde14efd65658e9b.jpg\"></p>\n<p>This choice is largely driven by the lack of efficient native routing protocols in the dominant operating systems.</p>\n<p>With half of the community using Windows as their main platform and the rest distributed across various versions of macOS and Linux, multichannel audio management depends on external bridges such as ASIO Link Pro, VB-Audio, QJackCtl, Blackhole, or Loopback.</p>\n<p>Working under this scheme means operating without technical determinism. The audio chain becomes dependent on third-party drivers remaining updated and compatible with constant changes in rendering engines and operating system updates.</p>\n<p>Systemic instability affects the professional&rsquo;s workflow. 
Due to the fragility of the routing, technical experimentation is avoided in order not to risk session stability, as any technical failure interrupts the day&rsquo;s work.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2c747005240c547c19dc052792ea4654.jpg\"></p>\n<p>In Latin America, configuring an immersive working environment is, above all, a continuous exercise in risk management, where technological stability is the scarcest resource and where the success of a mix often depends on the validity of an intermediate software patch.<br><br></p>\n</div>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<div>\n<h3><span style=\"text-decoration: underline;\">The HRTF Veil</span></h3>\n</div>\n<div>\n<p>This psychoacoustic myopia manifests as a technical contradiction that the data reveals with clarity.</p>\n<p><strong>Although 57.1% of professionals use spatialization tools primarily for their ease of use, a large portion report critical difficulties in accurately perceiving elevation and distance.</strong></p>\n<p><strong><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3ababa9daed73dc2c80c161d0216119e.jpg\"></strong></p>\n<p>The problem lies not only in the software, but in the imposition of an average statistical model onto individual physiology.</p>\n<p>By operating without personalized HRTF profiles or head-tracking systems, the brain enters a state of auditory asynchrony. 
A dissonance emerges between what the visual interface indicates and what the cognitive system is actually able to decode.</p>\n<p>This disconnection is deepened by a revealing finding from the survey; most users have no technical information about the HRTF profile they are using.</p>\n<p>References such as the Neumann KU100 remain abstract concepts for professionals working with panning plugins, since they lack access to the original microphone or to the physical experience of that capture.</p>\n<p>Given that access to tools for generating an individualized HRTF is almost nonexistent for the average user, and that there are no simple workflows to capture one&rsquo;s own acoustic biometrics, professionals are forced to work with borrowed hearing.</p>\n</div>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<p>Without knowing which filter is being applied to their own perception, control over the sound translation chain is lost. This lack of technical transparency is compounded by a systemic absence of standards, where no unified protocols or clear formats exist to define how to resolve localization inconsistencies in elevation and distance. While the horizontal plane enjoys a relative level of technical maturity, spatial decisions outside that axis lack a shared frame of reference.</p>\n<p>Today, positions above and below the listener remain in a gray zone where each manufacturer applies proprietary and closed algorithms.<br><br>For the industry, this opacity represents a design opportunity. 
Opening these algorithms and enabling accessible tools for personalized capture would allow binaural to evolve from a generic simulation into a professional grade monitoring tool.<br><br>Only through transparency in these processes can the engineer regain authority over their own listening and ensure that what is designed in the virtual environment translates faithfully into the physical space.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<div>&nbsp;</div>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<h3><span style=\"text-decoration: underline;\">The Technical Cost of Lacking Standards</span></h3>\n</div>\n<div>\n<p>The spatial uncertainty reported by professionals is the result of structural gaps that affect the predictability of current workflows.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<p><strong>Inconsistency between rendering engines represents the first technical conflict.</strong><br><br>Although the ADM standard defines object positioning, there is no shared reference for timbral response. This lack of unified criteria causes the same object to sound different depending on the software being used and forces the engineer to compensate for system deviations instead of making creative decisions.</p>\n<p><strong>The implementation barrier of the SOFA format deepens this lack of coherence within digital production environments.</strong></p>\n<p>Although this standard was created to universalize HRTF profiles, its integration into DAWs remains unintuitive and lacks accessible tools for capturing one&rsquo;s own acoustic biometrics. 
Without simplified loading protocols, the industry remains tied to generic profiles that impose an external auditory morphology and undermine monitoring precision.</p>\n<p><strong>The cone of confusion and artificial compensation emerge as the final consequence of this perceptual mismatch.</strong><br><br>The data reveals a technical drift in which, due to the absence of standards for handling the Blauert bands, professionals resort to excessive reverberation to force a sense of spatiality that the system cannot natively guarantee. Only through protocols that prioritize human perception will it be possible to transform this instability into true technical sovereignty for the professional.<br><br></p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\n<div>\n<h3><span style=\"text-decoration: underline;\">Toward a Diagnosis of Technical Sovereignty</span></h3>\n</div>\n<div>\n<p>This article does not seek to offer a conclusion, but rather a first approach to the state of regional infrastructure.</p>\n<p>The data presented reflects an initial phase focused on the most visible structural failures, such as physical validation, ecosystem fragility, and psychoacoustic uncertainty. 
Critical dimensions still remain to be analyzed, including educational gaps, distribution limitations, and the economic cost of operating without shared standards.</p>\n<p><strong>The problem does not lie in a lack of professional competence, but in a profound asymmetry between software design and our operational reality.</strong></p>\n<p>What clearly emerges is that specialists make decisions within technical frameworks that prevent those decisions from being validated or transferred with precision.</p>\n<p>As long as validation is treated as an individual responsibility instead of an infrastructure challenge, immersive audio will continue to depend on intuition and tolerance for an unstable working architecture.</p>\n<p>Naming the problem with precision is the only way to demand solutions that truly respond to our context.</p>\n<p>This research seeks to replace dependence on visual indicators with real confidence grounded in technical determinism and process transparency.<br><br></p>\n<div>\n<div style=\"text-align: center;\"><em><span>This diagnosis will continue to grow with each new experience added to the survey, and very soon I will be sharing more detailed findings so we can keep building this regional perspective together. If you would like to add your voice to the research and help complete the map of our professional reality, you can visit the link below.</span></em></div>\n</div>\n<div>\n<p style=\"text-align: center;\"><em><a href=\"https://forms.gle/BTT9C8XAU8qnKpjs9\">https://forms.gle/BTT9C8XAU8qnKpjs9</a></em></p>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n&lt;/article&gt;",
        "topics": [
            {
                "id": 621,
                "name": "3daudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 232,
                "name": "Audio 3d",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2323,
                "name": "Audio Codec",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2463,
                "name": "Dolby Atmos",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4165,
                "name": "HRTF",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2341,
                "name": "immersive audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 622,
                "name": "Immersiveaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4164,
                "name": "LATAM",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4166,
                "name": "L-isa",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4168,
                "name": "sol rezza",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1169,
                "name": "SpatGris",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2249,
                "name": "spatial",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 900,
                "name": "spatialaudio ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 3138,
                "name": "spatial sound",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 618,
                "name": "Spatialsound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4167,
                "name": "workflow",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 180,
                "name": "Workstation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 29278,
            "forum_user": {
                "id": 29250,
                "user": 29278,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Sol-Rezza-05-2024-214x300.jpg",
                "avatar_url": "/media/cache/b1/29/b12985af83e892ca90cecaaaf693b3b9.jpg",
                "biography": "Sol Rezza is an Argentinian composer, sound designer and audio engineer. Her practice incorporates experimental electronics with spatial audio to create immersive experiences for virtual ecosystems and live performances.\nCombine multilingual voice samples, granular synthesis and sequencers with open-source multichannel audio technology like the SoundSquares plug-in.\nCurrently, she is developing research on how new technologies (AI, machine learning, VR, etc.) influence the creation and production of contemporary storytelling.\nRezza's work has been shown at MUTEK Montreal (CA), MUTEK (AR/ES), CTM Festival (DE), IN/OUT Festival, Tsonami Festival (CL), BRIWF festival (BR), Simultan Festival (RO), Borealis Festival (NO), HÖRLURS Festival (SE), among others. She participated in artist residencies including the Radio Art Residency at Radio Corax (DE) Somerset House Studios Residency (UK) and Binaural Nodar Residency (PT).",
                "date_modified": "2026-02-05T19:19:13.352241+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "solrezza",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 104,
                    "user": 29278,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "immersive-audio-in-latin-america-does-not-have-a-talent-problem-it-has-a-disconnect-from-our-technical-reality",
        "pk": 4308,
        "published": false,
        "publish_date": "2026-02-03T23:00:22.448574+01:00"
    },
    {
        "title": "Personal Branding Reimagined: Blending Ancient Wisdom with Modern Identity",
        "description": "In today’s digital-first world, personal branding has evolved into more than just an online presence—it’s a reflection of who you truly are. From social media profiles to professional interactions, your brand influences how people perceive your credibility, confidence, and purpose. But while many focus on external presentation, the real foundation of a strong brand lies within.",
        "content": "<p>In today&rsquo;s digital-first world, personal branding has evolved into more than just an online presence&mdash;it&rsquo;s a reflection of who you truly are. From social media profiles to professional interactions, your brand influences how people perceive your credibility, confidence, and purpose. But while many focus on external presentation, the real foundation of a strong brand lies within.</p>\n<p>The most impactful personal brands are built on clarity and authenticity. When you understand your strengths, challenges, and life direction, your communication becomes more natural and effective. This is why many individuals are now exploring traditional knowledge systems that offer deeper self-awareness and guidance.</p>\n<p>One such powerful approach comes from the principles of Lal Kitab, a unique and practical branch of astrology known for its simplicity and effectiveness. Unlike complex astrological systems, Lal Kitab focuses on real-life solutions and easy-to-follow remedies that help individuals bring balance into their lives. This inner balance plays a crucial role in shaping how you present yourself to the world.</p>\n<p>As more people seek structured ways to learn these concepts, enrolling in a <a href=\"https://iivs.com/lal-kitab-astrology/\"><strong>Lal Kitab Course Online</strong></a> has become a popular choice. Such courses not only provide theoretical knowledge but also practical insights that can be applied in everyday life. When you start understanding your patterns and energies, you naturally make better decisions&mdash;whether in career, relationships, or personal growth.</p>\n<p>This is where platforms like IIVS are making a difference. By combining ancient wisdom with modern teaching methods, they are helping learners gain clarity in a simple and accessible way. Their approach focuses on practical application, ensuring that knowledge is not just learned but lived.</p>\n<p>A strong personal brand is not created overnight. 
It requires consistency, self-belief, and a clear sense of direction. When your inner world is aligned, your external image becomes more powerful and trustworthy. People connect more with authenticity than perfection, and this connection is what builds long-term influence.</p>\n<p>Another important aspect of branding is differentiation. In a crowded marketplace, standing out is essential. When you base your brand on genuine self-awareness rather than imitation, you automatically create a unique identity. Ancient systems like Lal Kitab provide a different perspective&mdash;one that helps you understand not just what to do, but when and how to do it.</p>\n<p>Moreover, gaining knowledge in this field can also open new opportunities. Many individuals are now using their understanding to guide others, whether through consultations, content creation, or coaching. This not only enhances their personal brand but also builds credibility and trust within their audience.</p>\n<p>Consistency in thoughts, actions, and communication is what strengthens a brand over time. When you are aligned internally, maintaining this consistency becomes easier. You no longer need to follow trends blindly because your direction is clear.</p>\n<p>In conclusion, personal branding today is a blend of inner clarity and outer expression. By exploring deeper knowledge systems and learning from the right platforms, you can create a brand that is both authentic and impactful. When your identity is rooted in understanding and purpose, your brand doesn&rsquo;t just grow&mdash;it resonates.</p>",
        "topics": [
            {
                "id": 4552,
                "name": "online",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 165085,
            "forum_user": {
                "id": 164849,
                "user": 165085,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/logo_dxiCrkQ.png",
                "avatar_url": "/media/cache/9c/58/9c585de341e2e3d9553301207e67857c.jpg",
                "biography": "IIVS offers insightful learning programs focused on spiritual growth and self-discovery. Our courses are designed to help learners understand deeper aspects of consciousness with clear guidance and practical methods. Through our <a href=\"#\">Past Life Regression Course</a>, students explore techniques to uncover past life experiences, recognize karmic influences, and gain clarity about present life challenges. IIVS helps learners turn this knowledge into a powerful tool for personal and spiritual development.",
                "date_modified": "2026-03-15T13:23:32.731412+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "iivs121",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "personal-branding-reimagined-blending-ancient-wisdom-with-modern-identity",
        "pk": 4595,
        "published": false,
        "publish_date": "2026-04-05T08:30:35.773794+02:00"
    },
    {
        "title": "Xp 1.30: New Features, Refined Experience",
        "description": "The latest update of Xp for Live, a spatial sound design environment based on Ircam Spat~ for Ableton Live, introduces a streamlined interface, and enhanced performance. Designed for composers, sound designers, and researchers, Xp 1.30 empowers users to push the boundaries of immersive sound composition and explore new creative possibilities.",
        "content": "<p><em>First released in 2021 on the IRCAM Forum, Xp4l&mdash;now simply known as Xp&mdash;has evolved significantly with its latest major update. Xp 1.30 now supports the latest versions of Ableton Live and Max MSP, offering improved integration, a more seamless workflow, and a host of important new features. This article will highlight the latest innovations and explore the key changes in the workflow that combine the power of Max for Live with a standalone application, providing users with an optimized and more versatile experience.</em></p>\r\n<p>&nbsp;</p>\r\n<h3><strong>A New Era in Xp Ui Design</strong></h3>\r\n<p>Xp 1.30 marks a significant departure from its previous design, addressing feedback about the earlier version, which was often described as dense and overly complex. The new update introduces a <strong>sober, elegant, and unified interface</strong> that enhances usability and creates a more intuitive working environment.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5967571ac0792832a3c861ac6aefecde.jpeg\" /></p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e3dece62827e6c82f3fb93498304e23d.jpeg\" /></p>\r\n<p>&nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/25a8952dc3e4e77cff321ef5855f02f5.jpeg\" /></p>\r\n<p>&nbsp;</p>\r\n<p>This redesign not only streamlines navigation and accessibility but also aligns seamlessly with the aesthetic of Ableton Live 12, ensuring a cohesive experience for users. The refined interface fosters greater comfort when working within the Xp environment, allowing composers, sound designers, and researchers to focus on creativity without distractions.</p>\r\n<p>&nbsp;</p>\r\n<h3><strong>Modularity: build your environment</strong></h3>\r\n<p>With Xp 1.30, modularity takes center stage, giving users flexibility to design their creative workflows. 
Now composed of 12 Max for Live devices, the project embraces a modular approach, allowing users to tailor their environment to meet their specific needs.</p>\r\n<p>New devices such as <strong>xp.source.anime, xp.source.osc, and xp.rec</strong> expand the possibilities for sound generation, data flow control, and recording. Whether you&rsquo;re focusing on dynamic source control, advanced oscillation, or streamlined recording capabilities, these devices enable a user to scale and adapt the setup as a project evolves.</p>\r\n<p style=\"padding-left: 200px;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/12c2254a60408d276778808128ce7807.jpeg\" /></p>\r\n<p style=\"padding-left: 200px;\">&nbsp;</p>\r\n<h3><strong>Simplicity when you need it</strong></h3>\r\n<p>In the spirit of balancing innovation with ease of use, <strong>xp.source.simply</strong> was designed to provide straightforward functionality without compromising creative potential. This device focuses on offering a simplified and intuitive interface for source manipulation, making it an ideal starting point for users who need quick, effective results without diving into complex configurations.</p>\r\n<p style=\"padding-left: 160px;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e5fd269bfaeb036b8352016697513d6f.jpeg\" /></p>\r\n<p>Perfect for rapid prototyping or minimalist setups, <strong>xp.source.simply</strong> enables users to create and control sound sources with efficiency and clarity. Its intuitive design ensures that even beginners can harness the power of Xp, while experienced users can appreciate its speed and reliability in their workflows. 
Whether you&rsquo;re sketching ideas or integrating it into a larger modular system, <strong>xp.source.simply</strong> is there to deliver simplicity when you need it most.</p>\r\n<p>&nbsp;</p>\r\n<h3><strong>Demo Mode for Everyone&rsquo;s Curiosity</strong></h3>\r\n<p>Xp 1.30 now offers a Demo Mode, allowing users to explore its features and workflows before committing to the full version. Setting it up is easy: simply download Xp from <a href=\"https://www.xp4l.com/downloadxp/\" title=\"Download Xp\">Download Xp</a>, install the software, and select Demo Mode upon launch. Have a look at 'Getting Started with Xp 1.30' on the YouTube channel:</p>\r\n<p><iframe title=\"YouTube video player\" src=\"https://www.youtube.com/embed/e9A5ILDpNnY?si=NtwqCjeVgsqdYOWG&amp;start=3\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p>The demo grants access to the full Xp environment, showcasing its core capabilities and allowing users to experiment with sound source creation while exploring how the standalone application seamlessly integrates with Max for Live. It comes with two limitations: after 10 minutes of use, you&rsquo;ll need to restart your Ableton project, and the number of active sound sources is limited to three. These constraints provide a practical introduction to Xp&rsquo;s potential, encouraging users to dive deeper into its full features.</p>\r\n<p>&nbsp;</p>\r\n<h3><strong>Customizing the User Experience</strong></h3>\r\n<p>Xp 1.30 introduces new features to enhance personalization, allowing the standalone application to adapt to different workflows. Navigation preferences can now be adjusted, and keyboard shortcuts configured, enabling a more intuitive and efficient working environment. 
These customization options streamline access to essential interface functions, ensuring a smoother process when working both with the application and with Ableton.</p>\r\n<p style=\"padding-left: 200px;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/dfe3a5b716f4479d23d220e759e95e9d.png\" /></p>\r\n<p>&nbsp;</p>\r\n<h3><strong>How to Download/Update</strong></h3>\r\n<p>Accessing Xp 1.30 is simple and straightforward. The latest version is now available for download on the official Xp website. To get started:</p>\r\n<p>1. Visit the download page: <a href=\"https://www.xp4l.com/downloadxp/\"><strong>xp4l.com/downloadxp</strong></a>.</p>\r\n<p>2. Download the installer compatible with your operating system.</p>\r\n<p>3. Follow the installation instructions provided to update your current version or set up Xp for the first time.</p>\r\n<p>Ensure that your setup includes the latest versions of Ableton Live, Max for Live, and Ircam Spat to fully enjoy the new features and improved integration offered in Xp 1.30.</p>\r\n<p style=\"padding-left: 280px;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f7f3d80ecb87e8a491110973d5de00ce.jpeg\" /></p>\r\n<p>&nbsp;</p>\r\n<p><em>Xp for Live extends its gratitude to the IRCAM Forum community for their invaluable feedback and ongoing support. Feel free to join the discussion in the <a href=\"https://discussion.forum.ircam.fr/t/xp-1-30/103620\">dedicated IRCAM Forum topic</a>.</em></p>\r\n<p><em>We encourage you to explore this new update, experiment with its features, and share your experiences.</em></p>\r\n<p><a href=\"https://www.xp4l.com/\">www.xp4l.com</a></p>",
        "topics": [
            {
                "id": 2342,
                "name": "3d audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 900,
                "name": "spatialaudio ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2568,
                "name": "xp130",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1709,
            "forum_user": {
                "id": 1707,
                "user": 1709,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/profil4.png",
                "avatar_url": "/media/cache/49/37/4937ce84289a16db6f9d5ea374376dfb.jpg",
                "biography": "Fraction (Eric Raynaud) is a new media, composer and sound artist whose work focuses in particular on immersive and audiovisual experience  design.\n\nHis practice has developed from a background in music composition and spatial sound which led him to put together complete skills in the field of new media art. He now devotes his time writing and producing pieces integrating digital materials of different kinds.  He is particularly interested in forms of experience that have strong interactions between generative art and sonic matter. Combining complex scenography and hybrid digital writing with visuals, sound and physical media, he aims in particular to forge links between contemporary art and digital scope within the frame of radical experiences.\n\nFascinated by sound intensity, energy, ecstasy, and the idea of \"being able to sculpt digital disorder as a raw matter\", he finds in the lexicon of sound spatialization the appropriate field for designing atypical pieces, placing at the center of his writing the immediate physical and emotional experience.",
                "date_modified": "2025-12-29T12:55:11.027970+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fraction",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "xp-130-new-features-refined-experience",
        "pk": 3230,
        "published": true,
        "publish_date": "2025-01-28T06:31:51+01:00"
    },
    {
        "title": "APEIRON - Robert Lisek",
        "description": "The project creates and tests new methods of creating virtual environments, sound spatialisation and intelligent agents. It is interactive game and new type of audio-visual installation through interactions with autonomous AI agents.",
        "content": "<p><span>The project offers an innovative fusion between three domains: 3d games development, music and artificial intelligence. The project creates and tests new methods of creating virtual environments, sound spatialisation and intelligent agents. It is interactive game and new type of audio-visual installation through interactions with autonomous AI agents.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>APEIRON is VR game application created in Unreal Engine and Python.</span></p>\r\n<p><span>It can be presented as demo game and/or installation.</span></p>",
        "topics": [],
        "user": {
            "pk": 21154,
            "forum_user": {
                "id": 21143,
                "user": 21154,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Lisek_portrait_rb_lisek46_2.jpg",
                "avatar_url": "/media/cache/8c/c5/8cc537299368c10d31af34af793faaf4.jpg",
                "biography": "Robert B. Lisek is an artist, mathematician and composer who focuses on systems, networks and processes (computational, biological, social). He is involved in a number of projects focused on media art, creative storytelling and interactive art. Drawing upon post-conceptual art, software art and meta-media, his work intentionally defies categorization. Lisek is a pioneer of art based on Artificial Intelligence and Machine Learning. Lisek is also a composer of contemporary music, author of many projects and scores on the intersection of spectral, stochastic, concret music, musica futurista and noise. Lisek is a founder of Fundamental Research Lab and ACCESS Art Symposium. He is the author of 300 exhibitions and concerts, among others: SIBYL - ZKM Karlsruhe; SIBYL II - IRCAM Center Pompidou; QUANTUM ENIGMA - Harvestworks Center New York and STEIM Amsterdam; TERROR ENGINES - WORM Center Rotterdam, Secure Insecurity - ISEA Istanbul; DEMONS - Venice Biennale (accompanying events); Manifesto vs. Manifesto - Ujazdowski Cartel of Contemporary Art, Warsaw; NEST - ARCO Art Fair, Madrid; Float - Lower Manhattan Cultural Council, NYC; WWAI - Siggraph, Los Angeles.",
                "date_modified": "2025-04-15T22:29:55.560395+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lisek",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3282,
                    "user": 21154,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "apeiron",
        "pk": 2045,
        "published": true,
        "publish_date": "2023-02-08T17:45:24+01:00"
    },
    {
        "title": "Thermophones 5G - Jacques Rémus",
        "description": "Résumé pour la communication du Forum Ircam 2024",
        "content": "<p><em><span></span></em></p>\r\n<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par : Jacques R&eacute;mus</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/REMUS/\">Biographie</a></p>\r\n<p><strong><br />Thermophones 5G.</strong></p>\r\n<p style=\"text-align: justify;\">Depuis plusieurs ann&eacute;es j&rsquo;ai eu la possibilit&eacute; de pr&eacute;senter lors de divers forums de l&rsquo;Ircam les particularit&eacute;s de mon travail qui porte sur les sons et la musique produits par des machines m&eacute;caniques t&eacute;l&eacute;command&eacute;es ou robotis&eacute;es et utilisant pour leur &eacute;tude et leurs fonctionnements divers logiciels d&eacute;velopp&eacute;s par l&rsquo;Ircam.</p>\r\n<p style=\"text-align: justify;\">Mes recherches sur les sons dus au ph&eacute;nom&egrave;ne de l&rsquo;instabilit&eacute; thermo-acoustique m&rsquo;ont amen&eacute; &agrave; explorer les divers proc&eacute;d&eacute;s &eacute;tudi&eacute;s par les scientifiques sp&eacute;cialistes de cette science. Ainsi sont n&eacute;es des d&eacute;monstrations, des installations et des concerts automatiques dont j&rsquo;ai pu montrer des &eacute;l&eacute;ments &agrave; ce forum.</p>\r\n<p style=\"text-align: justify;\">&nbsp;Les Thermophones de la 5<sup>&egrave;me</sup>&nbsp;g&eacute;n&eacute;ration sont le r&eacute;sultat d&rsquo;une importante &eacute;volution de la ma&icirc;trise musicale de ces ph&eacute;nom&egrave;nes. Une aide importante du Minist&egrave;re de la Culture, suite &agrave; la s&eacute;lection de mon projet &laquo;&nbsp;Ch&oelig;urs et Thermophones&nbsp;&raquo;, lors de l&rsquo;&nbsp;&laquo;&nbsp;A.P.I. 
Mondes Nouveaux&nbsp;&raquo; lanc&eacute;e en 2021 par la pr&eacute;sidence de la R&eacute;publique, a permis de construire un jeu d&rsquo;orgue mobile d&rsquo;une quarantaine de tuyaux de 0,5m &agrave; 3m de long et de pr&eacute;parer des concerts avec un ensemble de chanteurs.&nbsp;</p>\r\n<p style=\"text-align: justify;\">La conf&eacute;rence pr&eacute;sentera rapidement les diverses &eacute;tapes pr&eacute;c&eacute;dentes, puis les &eacute;tapes de la construction du jeu d&rsquo;orgue, de ses principes de fonctionnement, de son installation spatialis&eacute;e coupl&eacute;e avec un carillon tubulaire et des extraits de concerts (priv&eacute;s) r&eacute;alis&eacute;s en octobre 2023 avec les chanteurs du Ch&oelig;ur de Chambre de Paris.</p>\r\n<p style=\"text-align: justify;\">Il sera ensuite expliqu&eacute; les divers projets pr&eacute;vus pour &laquo;&nbsp;Ch&oelig;urs et Thermophones&nbsp;&raquo;, les modifications qu&rsquo;il est pr&eacute;vu d&rsquo;apporter au syst&egrave;me, et les perfectionnements qui &eacute;tabliront les base des Thermophones 6G&nbsp;!</p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong><span>&nbsp;</span></p>",
        "topics": [
            {
                "id": 1763,
                "name": "Choirs and Thermophones",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1762,
                "name": "mécamusique",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1759,
                "name": "musical machines,",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1760,
                "name": "organ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1761,
                "name": "thermoacoustic",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 627,
            "forum_user": {
                "id": 627,
                "user": 627,
                "first_name": "Jacques",
                "last_name": "Rémus",
                "avatar": "https://forum.ircam.fr/media/avatars/Jacques_Remus_photo_Marine_Lale_600x600_DSC_7184.png",
                "avatar_url": "/media/cache/87/d5/87d5a3210f1b68fa331488c355189592.jpg",
                "biography": "Jacques Rémus\n\nBiologiste à l'origine (agronome et chercheur en aquaculture), Jacques Rémus a choisi à la fin des années 70, de se consacrer à la musique et à l'exploration de différentes formes de création. Saxophoniste, il a participé à la fondation du groupe Urban-Sax. Il apparaît également dans de nombreux concerts allant de la musique expérimentale (Alan Sylva, Steve Lacy) à la musique de rue (Bread and Puppet). \n\nAprès des études en Conservatoires, G.R.M. et G.M.E.B., il a écrit des musiques pour la danse, le théâtre, le \"spectacles totaux\", la télévision et le cinéma. Il est avant tout l'auteur d'installations et de spectacles mettant en scène des sculptures sonores et des machines musicales comme \"Bombyx\", le \"Double Quatuor à Cordes\", \"Concertomatique\", \"Léon et le chant des mains\", les \"Carillons\" N ° 1, 2 et 3, : « l'Orchestre des Machines à Laver » ainsi que ceux présentés au Musée des Arts Forains (Paris).\n\nDepuis 2014, son travail s'est concentré sur le développement des «Thermophones». La construction d’un orgue mobile de 40 Thermophones de 5ème génération a permis de créer en 2023 le spectacle-concert « Chœurs et Thermophones »",
                "date_modified": "2025-12-05T12:05:16.942583+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 69,
                        "forum_user": 627,
                        "date_start": "2025-12-05",
                        "date_end": "2026-12-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 344,
                                "membership": 69
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "REMUS",
            "first_name": "Jacques",
            "last_name": "Rémus",
            "bookmarks": []
        },
        "slug": "5g-thermophones",
        "pk": 2724,
        "published": true,
        "publish_date": "2024-02-13T11:56:08+01:00"
    },
    {
        "title": "Sound design – an artistic / scienfic discipline in its own right",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p><span data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;As well as design, sound design can be seen as a &laquo;&nbsp;discipline of study in its own right&nbsp;&raquo; (Archer, 1979) that is positionned somewhere between art and science. Among others, the study of this specific discipline addresses the issue of the status and the role of the sound designer, and furthermore the extended issue of the way sound design projects articulate artistic know-how and scientific knowledge. The talk will present three emblematic sound design works &ndash;&nbsp;completed or in progress wihtin the Ircam STMS Lab Sound Perception &amp; Design group &ndash; that would give different insignts on how  the art/science articulation may be implemented and what it did &ndash; or will &ndash; produce in this particular context.&quot;}\" data-sheets-userformat=\"{&quot;2&quot;:4606,&quot;4&quot;:{&quot;1&quot;:2,&quot;2&quot;:16777215},&quot;5&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;6&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;7&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;8&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;9&quot;:0,&quot;10&quot;:0,&quot;11&quot;:4,&quot;15&quot;:&quot;Arial&quot;}\">As well as design, sound design can be seen as a &laquo;&nbsp;discipline of study in its own 
right&nbsp;&raquo; (Archer, 1979) that is positionned somewhere between art and science. Among others, the study of this specific discipline addresses the issue of the status and the role of the sound designer, and furthermore the extended issue of the way sound design projects articulate artistic know-how and scientific knowledge. </span></p>\r\n<p><span data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;As well as design, sound design can be seen as a &laquo;&nbsp;discipline of study in its own right&nbsp;&raquo; (Archer, 1979) that is positionned somewhere between art and science. Among others, the study of this specific discipline addresses the issue of the status and the role of the sound designer, and furthermore the extended issue of the way sound design projects articulate artistic know-how and scientific knowledge. The talk will present three emblematic sound design works &ndash;&nbsp;completed or in progress wihtin the Ircam STMS Lab Sound Perception &amp; Design group &ndash; that would give different insignts on how  the art/science articulation may be implemented and what it did &ndash; or will &ndash; produce in this particular context.&quot;}\" 
data-sheets-userformat=\"{&quot;2&quot;:4606,&quot;4&quot;:{&quot;1&quot;:2,&quot;2&quot;:16777215},&quot;5&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;6&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;7&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;8&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;9&quot;:0,&quot;10&quot;:0,&quot;11&quot;:4,&quot;15&quot;:&quot;Arial&quot;}\">The talk will present three emblematic sound design works &ndash;&nbsp;completed or in progress wihtin the Ircam STMS Lab Sound Perception &amp; Design group &ndash; that would give different insignts on how the art/science articulation may be implemented and what it did &ndash; or will &ndash; produce in this particular context.</span></p>",
        "topics": [],
        "user": {
            "pk": 115,
            "forum_user": {
                "id": 115,
                "user": 115,
                "first_name": "Nicolas",
                "last_name": "Misdariis",
                "avatar": "https://forum.ircam.fr/media/avatars/myPhoto_CR.JPG",
                "avatar_url": "/media/cache/2c/cd/2ccdde6a292f0a0054c61094af3111b8.jpg",
                "biography": "I am a research director, head of Ircam STMS Lab / Sound Perception & Design group, and presently deputy-head of the Ircam STMS Lab. I am graduated from an engineering school specialized in mechanics (1993), I got my Master thesis on applied acoustics and my PhD on synthesis/reproduction/perception of musical and environmental sounds. I defended, some years ago, my HDR (Habilitation to Direct Research) on the topic of Sciences of Sound Design. I have been working at Ircam as a research fellow since 1995 and contributed, in 1999, to the introduction of sound design in the Institute. During that time, I developed research works and industrial applications related to sound synthesis and reproduction, environmental sound and soundscape perception, auditory display, human-machine interfaces (HMI), interactive sonification and sound design. Since 2010, I am also a regular lecturer in the Sound Design Master at the High School of Art and Design in Le Mans (ESAD TALM, Le Mans).",
                "date_modified": "2026-03-02T12:04:38.503876+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 259,
                        "forum_user": 115,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "misdarii",
            "first_name": "Nicolas",
            "last_name": "Misdariis",
            "bookmarks": []
        },
        "slug": "sound-design-an-artistic-scienfic-discipline-in-its-own-right",
        "pk": 1341,
        "published": true,
        "publish_date": "2022-09-13T16:53:55+02:00"
    },
    {
        "title": "Unknowable Certainty: lullaby to put myself to rest by Cyan D'Anjou, Luisa do Amaral, Sunghoon Song",
        "description": "Unknowable Certainty is an immersive audiovisual performance showing the experience of being caught in the past–searching for a numerical value that might explain the moments leading up to a present confrontation with the “end”–as if to balance a debt.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4e399c4225dbe750318b1533b062ffc2.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"729\" height=\"486\" /></p>\r\n<p></p>\r\n<p>Presented by :&nbsp;Cyan D'Anjou, Luisa do Amaral, Sunghoon Song</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/cyandanjou/\" target=\"_blank\">Biography</a></p>\r\n<p>In the face of an inevitable End emerges an uncompromising realization: the expansion and weight of the past as the future narrows to an unknowable but certain finite point. To extend time, a cycle of reaching into a database of recollection hopes to yield validation for the cumulative result&ndash;the emotional landscape of now. Despite knowing our database comes short of reality, why, nonetheless, do we aim to quantify our lived experiences?</p>\r\n<p>Unknowable Certainty is a collaborative project between artists Cyan D&rsquo;Anjou, Sunghoon Song, and computational social scientist Luisa do Amaral. Their interdisciplinary approaches, characterized by the deliberate convergence and divergence of inquiries, culminate in the physical representation of a shared reflective process&ndash;a candid exposition of self-analysis. 
In hopes of allowing the cycle to rest, they approach the work from an existentialist perspective, aiming to invite compassion for sentiment and for the experience of being to exist validly, free of explanation.<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a3826d80896683c2ab4865de92169ebb.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"513\" height=\"513\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5644dff6714c86aff2488272835ef421.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"513\" height=\"513\" /></p>\r\n<p>In his discussion of subversions of rationality, Norwegian philosopher Jon Elster describes the moral and intellectual fallacies that humans are guilty of when dealing with mental or social states that are &ldquo;by-products of actions undertaken for other ends&rdquo; (Elster, 1983). These are states that cannot be brought about intentionally, nor can they be &ldquo;explained away&rdquo; easily by connecting the outcomes to specific actions.</p>\r\n<p>To tell the story of failing to capture the full depth of lived emotions within societally valued computational frameworks of rationality, Unknowable Certainty is an audiovisual performance showing the experience of being caught in the past&ndash;searching for a numerical value that might explain the moments leading up to a present confrontation with the &ldquo;end&rdquo;&ndash;as if to balance a scale. 
Told through the story of a person aging and reflecting on her past, the work portrays the hope that finding this variable might prove that human experiences have a logical explanation, and that the future thus becomes solvable and less uncertain.</p>\r\n<p>In a sociological sense, Unknowable Certainty was born from theories that explained social action through the lens of mathematical transaction, but in trying to map human experiences through the language of computation we fall short of accounting for the complexity of social reality. In every mathematical model there exists a built-in margin of error that accounts for the omitted, sentimental factors that go unobserved &ndash; factors that would invalidate our results. And thus, without an answer, we are caught in an endless search for an unknowable variable in the equation for certainty.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f06e72680f21529a8a5eb94599be343e.jpg\" width=\"1309\" height=\"737\" /></p>\r\n<p>The conceptual exploration behind Unknowable Certainty&nbsp;grew out of the artists&rsquo; mutual experience of desiring to elude the inherently human feeling of loss by seeking understanding through rationalization and finding explanations unreachable to the human eye through analysis. The team bonded over shared experiences of feeling caught in a cycle of going back and forth through analyses of past moments, exchanges, and memories, wishing there were a computational resolution or equation to solve for emotion. 
They combine their multidisciplinary backgrounds and dispositions to craft frameworks that would allow them to reinterpret these situations.</p>\r\n<p>Addressed in a distinctive immersive visual performance format, the performance and film probe different theories of how the human mind processes and interprets life experiences, from philosophical, sociological, psychological and cognitive perspectives, to imagine a scenario in which the past can be plotted as points on a graph. If we were able to, what would we gain in our ability to process uncertainty? What would we lose? In order to invite (or rather<span>&nbsp;</span><em>entice</em><span>&nbsp;</span>audiences to consider) compassion for human affect to coexist with our technological pursuits, the performance of Unknowable Certainty depicts two performers representing mind (data processing, past, logic) and body (present, phenomenology, affect) who initially act separately before eventually coming together to feel grounded in the current moment amongst the audience. To the extent that this study investigates the obsessive search for meaning or purpose in uncertain times, it starts from ourselves and our shared humanity.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1e664fb6ddc75d8709b3554927d64cb3.jpg\" width=\"1139\" height=\"641\" /></p>\r\n<p>The main visual elements of Unknowable Certainty, the graphs projected onto the floor, are referenced from early explorations looking for computational representations of the human mind and its perceived rationality. But the independent and dependent variables, and the points plotted on them, are intentionally emotional and subjective, rendering the graph essentially unusable by logical standards. 
Yet, in the performance and exhibition settings, placed in a space that allows diverse forms of expression&ndash;sound, movement, film&ndash;to coexist, this dysfunctional graph still communicates its message and intention clearly to its feeling, human audience. The combined elements of a visual film, live choreography, lullaby, sound design, and the durational depiction of a graphical analysis converge to illustrate the process of breaking free from reliving cycles of the past, and extend an invitation for sentiment to exist beyond understanding.</p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 38567,
            "forum_user": {
                "id": 38516,
                "user": 38567,
                "first_name": "Cyan",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/ACC_Cyan.jpg",
                "avatar_url": "/media/cache/c5/f2/c5f2f639afd0932d53b13b2ddc85ef89.jpg",
                "biography": "Cyan D’Anjou (b. 2000, Netherlands) is a speculative sculpture artist and media creator with a background in technology design and innovation ethics from Stanford University. Prior to joining RCA’s Information Experience Design program, she created tactile installations around AI’s growing presence in our everyday and the subsequent cultural and psychological changes that follow the normalisation of data capitalism. Her works have been exhibited internationally at venues including the High Museum of Art, SOMArts, Saatchi Gallery, and at Sonsbeek ‘16. Currently, her work takes on a speculative quality as she envisions the potential impacts of current societal advancements, which she often expresses in the form of multidisciplinary sculptures, videos, and installations. Cyan is particularly interested in investigating behavioural shifts as the divide between the virtual and physical worlds becomes more blurred. A central question in her work is, “how can human expression and identity be elevated in a steadily more automated world?”",
                "date_modified": "2025-02-23T22:32:05.779863+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "cyandanjou",
            "first_name": "Cyan",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "unknowable-certainty-lullaby-to-put-myself-to-rest-cyan-danjou-luisa-do-amaral-sunghoon-song",
        "pk": 3265,
        "published": true,
        "publish_date": "2025-02-10T12:24:30+01:00"
    },
    {
        "title": "Boulez100: A Short Biography",
        "description": "A keynote by Grégoire Lorieux, 27 Sept. 2025, Liepaja (Latvia)",
        "content": "<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\n<p>Pierre Boulez (1925&ndash;2016) was a central figure in contemporary music. As both a composer and conductor, he pushed the boundaries of modern music and technological integration. Boulez gained international recognition with works like <em>Le Marteau sans ma&icirc;tre</em>, combining serialism with timbral innovation. Founder of IRCAM in 1977, he promoted collaboration between music, science, and technology. His conducting career included leadership of major orchestras, leaving a lasting impact on music through his rigorous vision and creative audacity.</p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/photo_1_-_pierre_boulez,_février_2009_à_salzbourg_&copy;_le_regard_de_james,_jean_radel.jpg\" alt=\"Pierre Boulez, f&eacute;vrier 2009 &agrave; Salzbourg &copy; Le regard de James, Jean Radel\" width=\"500\" height=\"332\" /></p>\r\n<p><sub>Pierre Boulez, f&eacute;vrier 2009 &agrave; Salzbourg &copy; Le regard de James, Jean Radel</sub></p>\r\n<p><sub></sub></p>\r\n<p><sub><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></sub></p>",
        "topics": [],
        "user": {
            "pk": 3044,
            "forum_user": {
                "id": 3042,
                "user": 3044,
                "first_name": "Gregoire",
                "last_name": "Lorieux",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/cd7913e7acfc03b53fbc5d9c30da67ce?s=120&d=retro",
                "biography": "Grégoire Lorieux is a composer, artistic director, and computer music designer, teaching at IRCAM. After studying early music and completing a master’s thesis on Kaija Saariaho, he studied composition with Philippe Leroux and at the Conservatoire de Paris, while joining IRCAM as a technology professor. In 2012, he took part in SPEAP at Sciences Po Paris with Bruno Latour, exploring connections between art, ecology, and social engagement. Active in education, he has led numerous projects combining creation and cultural outreach, such as IRCAM’s Ateliers de la Création and Paysages Composés with Ensemble Ars Nova and Quatuor Diotima. From 2013 to 2024, he was co-director of Ensemble Itinéraire. He taught electroacoustic composition at the Paris Conservatoire from 2019 to 2024. His musical language integrates electronics and French spectralism, exploring various formats from installations to concert works. In 2022, he founded Mondes Sonores, an open-air festival linking music and ecology.",
                "date_modified": "2026-02-27T15:38:40.219400+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 354,
                        "forum_user": 3042,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 25,
                                "membership": 354
                            },
                            {
                                "id": 599,
                                "membership": 354
                            },
                            {
                                "id": 655,
                                "membership": 354
                            },
                            {
                                "id": 781,
                                "membership": 354
                            },
                            {
                                "id": 917,
                                "membership": 354
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "lorieux",
            "first_name": "Gregoire",
            "last_name": "Lorieux",
            "bookmarks": []
        },
        "slug": "boulez100-a-short-biography",
        "pk": 3561,
        "published": true,
        "publish_date": "2025-07-17T11:39:28+02:00"
    },
    {
        "title": "Breathing.Systems: Collective & Movement-based Spatial Sound Performance",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p><span>Breathing.Systems is a spatial sound performance using a wireless, low-latency multichannel sound system, with speakers worn on the bodies of performers, spatialising live singing through choreographed movement. By using spatial relationships and the movement of bodies to spatialise voice and sound, Breathing.Systems foregrounds the body and the relational quality of sound, positing a relational approach to spatialisation.</span></p>\r\n<p><span>Nik Rawlings invites IRCAM Forum members to become co-performers in this workshop, with participants wearing the Breathing.Systems speakers and collectively creating short, choreographed sound spatialisations in a combined listening and performing exercise.</span></p>",
        "topics": [],
        "user": {
            "pk": 22515,
            "forum_user": {
                "id": 22503,
                "user": 22515,
                "first_name": "Nik",
                "last_name": "Rawlings",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/fb94f0d83ed455820544f9dc41bc70b8?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-26T17:25:15.795828+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nikrawlings",
            "first_name": "Nik",
            "last_name": "Rawlings",
            "bookmarks": []
        },
        "slug": "breathingsystems-collective-movement-based-spatial-sound-performance",
        "pk": 1346,
        "published": true,
        "publish_date": "2022-09-13T17:18:20+02:00"
    },
    {
        "title": "New Tuning Theory/Practice",
        "description": "New Tuning Theory/Practice",
        "content": "<p><span style=\"font-size: 1.125rem;\"><img style=\"font-size: 1.125rem;\" src=\"/media/uploads/user/be1a70ab15653d65a7293bb9983012ce.png\" alt=\"\" width=\"1384\" height=\"248\" />This function was used as an intentional investigation into the 12-tone system and some of its grumbles and irregularities. Initial interpretation suggests that the dual 12th/13th nature of the main structure of the system has made it unobvious how to apply formats that conform to or respect a harmonic series in a fashion that can smoothly repeat within its confines without anomalies such as the commas.</span></p>\r\n<p><span style=\"font-size: 1.125rem;\">The rectilinear function y = 0.0833333x (quite simply a twelfth) forms a series of ratios which, when picked from the second-octave series (i.e. 1.0833333&ndash;2) and applied from note/position value -1 to the n value of the starting point of an octave cycle, then cycled continuously through consecutive octaves, will preserve some ratios of the harmonic series. The green line is 12&radic;2.</span></p>\r\n<p><img src=\"/media/uploads/user/ab51403bc1a11975f4b869e011614d01.png\" alt=\"\" width=\"918\" height=\"290\" /></p>\r\n<p><span style=\"font-size: 1.125rem;\">The logic: 1/12 = 0.0833333, which is in fact 1 of 12, so = 1; 2/12 = 0.1666666, which is 2 of 12, so = 2; and so on up through 12/12. The second cycle starts at 13, derived from the product 12 * 13/12 = 13; 14 is derived from the product 12 * the ratio 7/6, 15 from 12 * the ratio 5/4, and so on. These are the ratios from the second period </span><span>of the function y = 0.0833333x. At the point of 'doubling' the cycle starts again, so 24 * 13/12 = 26 and 24 * 7/6 = 28. This offsetting of the cycle back from the point of origin by one semitone irons out the comma, or non-system ratio fraction. There are other lines in the system, one of them being the function y = 0.07692307692x (a thirteenth), the polar twin of y = 0.0833333x, separated by a semitone. These will be available to explore in the next update to this article. To follow this and comment, please do so either here or at 12Fingers.org, where fresh material will be continually added.</span></p>\r\n<p><span>Here are some pretty pictures.</span></p>\r\n<p><span><img src=\"/media/uploads/user/1484f189da359456964389ae568dd48d.png\" alt=\"\" width=\"1399\" height=\"183\" /></span></p>\r\n<p><span>144 is the common dividend and horizontal period for all elements internalised in instances of the system, re-occurring vertically at level 13, then again at 26, etc. As an analogy, 0&ndash;144 are the nodes of the first instance.</span></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 286,
                "name": "12tet",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 285,
                "name": "Just",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 284,
                "name": "Pythagorean",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 283,
                "name": "Theory",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 191,
                "name": "Tuning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17661,
            "forum_user": {
                "id": 17657,
                "user": 17661,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7356ec9886128a3b915cfe90fc832be6?s=120&d=retro",
                "biography": null,
                "date_modified": "2023-11-18T10:39:32.702791+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "flartec",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "new-tuning-theorypractice-2",
        "pk": 442,
        "published": false,
        "publish_date": "2020-01-21T07:09:22+01:00"
    },
    {
        "title": "The Sacred Buzz: Exploring Bee Spiritual Meaning Across Traditions and Their Modern Relevance",
        "description": "For thousands of years, humans have regarded bees with a mixture of reverence, fascination, and respect that transcends their practical importance as pollinators and honey producers. Across diverse cultures and spiritual traditions, the bee spiritual meaning encompasses themes of community wisdom, divine connection, and natural harmony. This rich symbolic heritage carries surprising relevance in our modern world, where both traditional wisdom and scientific understanding highlight the bee's crucial role in our ecosystem and wellbeing.\n",
        "content": "<h2><strong>Ancient Reverence: Bees as Divine Messengers</strong></h2>\n<p><span style=\"\">The earliest documented bee spiritual meaning appears in Ancient Egyptian culture, where bees symbolized royalty, rebirth, and divine tears. Pharaohs incorporated bee imagery into royal titles, and honey was considered a sacred substance worthy of offering to the gods. This elevated status wasn't unique to Egypt&mdash;across Mediterranean civilizations, bees held divine associations.</span></p>\n<p><span style=\"\">In Greek mythology, bees connected to prophecy, being associated with the Oracle of Delphi and several deities. Priestesses of certain temples were called \"melissae\" (bees), suggesting their role as divine messengers. This symbolic connection between bees and spiritual communication appears remarkably consistent across cultures separated by vast distances.</span></p>\n<p><span style=\"\">\"The universal reverence for bees across ancient civilizations suggests an intuitive recognition of their importance beyond mere food production,\" explains Dr. Elizabeth Shepherd, cultural anthropologist specializing in natural symbolism. \"These societies recognized something profoundly significant in bee communities&mdash;a wisdom that transcended human understanding.\"</span></p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c7fc0b640548cb6b19a5402da85bd3eb.png\"></p>\n<h2><strong>Organized Community: The Social Metaphor</strong></h2>\n<p><span style=\"\">Across traditions, the bee's complex social structure offered powerful metaphors for ideal human society. Medieval Christian texts praised bee colonies as models of divine order, while Confucian philosophy highlighted their exemplary social organization. 
Indigenous North American traditions viewed bee communities as demonstrations of how individuals contribute to collective prosperity through specialized roles and cooperation.</span></p>\n<p><span style=\"\">This aspect of bee spiritual meaning&mdash;representing harmonious community&mdash;remains particularly relevant today as human societies grapple with questions of sustainability, cooperation, and social organization in increasingly complex environments.</span></p>\n<h2><strong>Healing Wisdom: The Therapeutic Connection</strong></h2>\n<p><span style=\"\">Perhaps most fascinating is how spiritual traditions anticipated scientific discoveries about bees' healing contributions. Many cultures associated bees with medicinal wisdom long before modern research identified the therapeutic properties of honey, propolis, and other bee products.</span></p>\n<p><span style=\"\">Today, this ancient intuition finds scientific validation in products like manuka honey. Understanding <a href=\"https://manukora.com/blogs/honey-guide/what-do-the-different-mgo-grades-mean\">manuka honey grades</a> helps consumers identify products with genuine therapeutic potential. 
These grading systems&mdash;measuring factors like methylglyoxal concentration&mdash;provide standardized indicators of antibacterial potency, allowing people to select appropriate strengths for specific health applications.</span></p>\n<p><span style=\"\">The most common manuka honey grades include:</span></p>\n<ul>\n<li style=\"\"><span style=\"\">UMF 5-10 or MGO 30-100: Entry-level therapeutic benefits</span></li>\n<li style=\"\"><span style=\"\">UMF 10-15 or MGO 100-250: Moderate medicinal activity</span></li>\n<li style=\"\"><span style=\"\">UMF 15+ or MGO 250+: High therapeutic potency</span></li>\n</ul>\n<p><span style=\"\">Indigenous Māori healers recognized the special properties of honey from manuka flowers centuries before laboratory analysis identified its unique compounds&mdash;demonstrating how traditional ecological knowledge often precedes scientific confirmation.</span></p>\n<h2><strong>Transformational Wisdom: Alchemy and Change</strong></h2>\n<p><span style=\"\">Across spiritual traditions, <a href=\"https://manukora.com/blogs/honey-guide/what-do-bees-symbolize\">bee spiritual meaning</a> frequently connects to transformation and alchemy. The bee's ability to transform nectar into honey served as a powerful metaphor for spiritual transformation&mdash;converting ordinary experience into wisdom. In Celtic tradition, bees symbolized hidden knowledge and the transformation of mundane substances into sacred ones.</span></p>\n<p><span style=\"\">This transformative symbolism remains relevant in contemporary spiritual practices, where bee imagery represents personal growth, life transitions, and the distillation of experience into wisdom. 
Modern wellness practitioners often incorporate this aspect of bee symbolism into transformational coaching and personal development work.</span></p>\n<h2><strong>Ecological Prophets: Bees as Environmental Indicators</strong></h2>\n<p><span style=\"\">Perhaps the most profound aspect of bee spiritual meaning for our modern era is their role as indicators of ecological health. Across traditions, bees symbolized harmony with the natural world and environmental balance. Today, as bee populations face unprecedented threats, this symbolic connection has taken on urgent practical significance.</span></p>\n<p><span style=\"\">\"The health of bee populations directly reflects the health of our ecosystems,\" notes environmental scientist Dr. James Morgan. \"In this sense, ancient traditions that viewed bees as messengers between worlds were remarkably prescient&mdash;bees truly are messengers about the state of our relationship with the natural world.\"</span></p>\n<p><span style=\"\">This awareness has spawned renewed interest in sustainable beekeeping, habitat preservation, and natural approaches to bee health&mdash;movements that honor both the practical and symbolic importance of these remarkable creatures.</span></p>\n<h2><strong>Fertility and Abundance: Life-Giving Symbols</strong></h2>\n<p><span style=\"\">Throughout history, bee spiritual meaning has been strongly associated with fertility, abundance, and life-giving energy. This connection appears across European, African, and Asian traditions, where bees symbolized prosperity and the generative power of nature. Honey was often used in fertility rituals and ceremonies celebrating abundance.</span></p>\n<p><span style=\"\">This symbolic association reflects the bee's essential role in plant reproduction through pollination&mdash;a process vital for agricultural abundance. 
Modern understanding of bees' contribution to food security gives new relevance to these ancient associations between bees and abundance.</span></p>\n<h2><strong>Sweetness of Wisdom: Honey as Metaphor</strong></h2>\n<p><span style=\"\">Across spiritual traditions, honey serves as a metaphor for wisdom, divine truth, and the sweetness of spiritual understanding. From Biblical references to \"words sweeter than honey\" to Hindu texts comparing divine bliss to honey, this metaphorical connection appears consistently across diverse traditions.</span></p>\n<p><span style=\"\">This aspect of bee symbolism carries significant relevance in contemporary mindfulness and contemplative practices, where the patient, present-moment awareness of bees collecting nectar offers a powerful model for gathering wisdom from life experiences.</span></p>\n<h2><strong>Contemporary Revival: Ancient Wisdom Meets Modern Understanding</strong></h2>\n<p><span style=\"\">Today's renewed interest in natural healing approaches has sparked fresh appreciation for both manuka honey grades and traditional bee spiritual meaning. Contemporary wellness practitioners increasingly integrate both scientific understanding of bee products and the symbolic wisdom embedded in cultural traditions surrounding bees.</span></p>\n<p><span style=\"\">This integration represents a holistic approach to health and wellbeing that honors both empirical evidence and traditional ecological knowledge&mdash;creating a more complete understanding that values both measurable properties and meaningful symbolism.</span></p>\n<p><span style=\"\">The enduring spiritual significance of bees across human cultures reminds us that some of our most important relationships with the natural world operate on multiple levels&mdash;practical, ecological, and symbolic. 
By honoring both the scientific understanding of bees' contributions and the rich heritage of bee spiritual meaning across traditions, we gain a more complete appreciation of these extraordinary creatures and their continuing relevance to human wellbeing.</span></p>\n<p><span style=\"\">In an age seeking reconnection with natural wisdom, the sacred buzz of bees offers profound guidance&mdash;wisdom as sweet and nourishing as the honey they produce.</span></p>",
        "topics": [],
        "user": {
            "pk": 105429,
            "forum_user": {
                "id": 105297,
                "user": 105429,
                "first_name": "antonio",
                "last_name": "miller",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/dbde4120d2c615fb10b36c8f0ca97a0f?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-03-07T13:23:51.392627+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "antoniobmiller56",
            "first_name": "antonio",
            "last_name": "miller",
            "bookmarks": []
        },
        "slug": "the-sacred-buzz-exploring-bee-spiritual-meaning-across-traditions-and-their-modern-relevance",
        "pk": 3327,
        "published": false,
        "publish_date": "2025-03-06T07:51:22.734221+01:00"
    },
    {
        "title": "Analysis Synthesis Tools by Pierre Guillot",
        "description": "In this talk, Pierre Guillot will give a brief introduction to the historical heritage and the artistic and research context in which ASAP and Partiels were developed, highlighting the challenges and innovative nature of these projects. ASAP is a set of audio plug-ins for creatively transforming sound. You are invited to play with the sound representation and the synthesis parameters to generate new sounds. The plug-ins can also be used to correct defects in the sound and to improve audio rendering. Thanks to the ARA2 integration, the spectral transformations are integrated into your editing workflow. Partiels is an audio analysis application and collection of plug-ins that lets you analyze one or more audio files using Vamp plug-ins, load data files, and visualize, edit, organize, and export results as images or text files that can be used in other applications such as Max, Pure Data, Open Music, and more. In parallel with Partiels, a set of analyses has been ported to Ircam's Vamp plug-ins: SuperVP, IrcamBeat, IrcamDescriptor, PM2, FCN, Crepe, Whisper. These plug-ins enable FFT, LPC, transient, fundamental, formant, tempo, STT, and other analyses.",
        "content": "<p style=\"font-weight: 400;\">In this talk, Pierre Guillot will give a brief introduction to the historical heritage and the artistic and research context in which ASAP and Partiels were developed, highlighting the challenges and innovative nature of these projects. ASAP is a set of audio plug-ins for creatively transforming sound. You are invited to play with the sound representation and the synthesis parameters to generate new sounds. The plug-ins can also be used to correct defects in the sound and to improve audio rendering. Thanks to the ARA2 integration, the spectral transformations are integrated into your editing workflow. Partiels is an audio analysis application and collection of plug-ins that lets you analyze one or more audio files using Vamp plug-ins, load data files, and visualize, edit, organize, and export results as images or text files that can be used in other applications such as Max, Pure Data, Open Music, and more. In parallel with Partiels, a set of analyses has been ported to Ircam's Vamp plug-ins: SuperVP, IrcamBeat, IrcamDescriptor, PM2, FCN, Crepe, Whisper. These plug-ins enable FFT, LPC, transient, fundamental, formant, tempo, STT, and other analyses.</p>",
        "topics": [],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "analysis-synthesis-tools-by-pierre-guillot",
        "pk": 3072,
        "published": true,
        "publish_date": "2024-10-24T17:00:14+02:00"
    },
    {
        "title": "米兰体育",
        "description": "米兰体育 (https://milan-sports.com) is a comprehensive entertainment platform focused on interactive global sports events. Guided by a philosophy of professionalism, stability and innovation, it provides users with a high-quality online entertainment experience.",
        "content": "<h2>Platform Overview</h2>\n<p><a href=\"https://milan-sports.com/\">米兰体育</a> (<a href=\"https://milan-sports.com/\">https://milan-sports.com</a>) is a comprehensive sports entertainment platform focused on interactive global events and digital entertainment services. Built on a professional operating system and combining an advanced technical architecture with stable system support, it aims to deliver a safe, smooth, high-quality online experience. Drawing on mature data-integration capabilities and rich industry resources, 米兰体育 continuously refines its product structure, strengthens its services and builds its overall brand.</p>\n<p><img alt=\"米兰体育 official website\" src=\"https://static.ytyh.org/outlink/e223ea7acff52a63dbb6ffe9.jpg\"></p>\n<h2>Event Resources</h2>\n<p>The platform covers mainstream global sports, including football, basketball, tennis and esports, aggregating international leagues and events at every level. Event information is updated in real time with fast, accurate data synchronization, and varied interactive features enrich the user experience. Optimized data interfaces and response times keep the service stable and smooth even at peak hours.</p>\n<h2>Product Matrix</h2>\n<p>米兰体育 has built a diversified product ecosystem that integrates sports betting, live-dealer entertainment, electronic games, card games and lottery content into a complete entertainment matrix. Its sections connect seamlessly within a clear layout that serves the needs of different user groups, and the platform keeps adding popular content and new features to strengthen engagement and retention.</p>\n<h2>Technical Safeguards</h2>\n<p>The platform relies on a mature, stable system architecture and multiple layers of encryption to protect data and account information. A comprehensive risk-control mechanism and back-end monitoring keep operations well managed, while optimized server deployment ensures fast access and system stability across devices.</p>\n<h2>User Services</h2>\n<p>米兰体育 emphasizes user experience and service quality: the interface is clean and intuitive, and workflows are simple and efficient. Multi-device support allows seamless switching between web and mobile. The customer service team provides timely, efficient online support, and the platform regularly runs promotions and member benefits to encourage participation.</p>\n<h2>Vision</h2>\n<p>Following trends in digital sports entertainment, 米兰体育 is committed to compliant operation and steady expansion, continuously improving its products and technology. It plans to further strengthen its brand, deepen its content ecosystem and build a smarter, more professional and sustainable sports entertainment platform that creates long-term value for its users.</p>\n<p>#米兰体育 #米兰体育官网 #米兰体育APP #米兰体育下载 #米兰APP下载 #米兰体育网址</p>\n<p>&nbsp;</p>\n<p><a href=\"https://www.yelp.com/user_details?userid=6P1g4XlB4rSzP2K4PTtAtw\">https://www.yelp.com/user_details?userid=6P1g4XlB4rSzP2K4PTtAtw</a></p>\n<p><a href=\"https://milansport2026.mn.co/members/39069551\">https://milansport2026.mn.co/members/39069551</a></p>\n<p><a href=\"https://www.invelos.com/UserProfile.aspx?Alias=milansport2026\">https://www.invelos.com/UserProfile.aspx?Alias=milansport2026</a></p>\n<p><a href=\"https://about.me/milansport2026/getstarted\">https://about.me/milansport2026/getstarted</a></p>\n<p><a href=\"https://participa.aytojaen.es/profiles/milansport2026\">https://participa.aytojaen.es/profiles/milansport2026</a></p>\n<p><a href=\"https://account.archdaily.com/us/users/profile\">https://account.archdaily.com/us/users/profile</a></p>\n<p><a 
href=\"https://vocal.media/authors/-tq5h7u0i37\">https://vocal.media/authors/-tq5h7u0i37</a></p>\n<p><a href=\"https://app.brancher.ai/user/UMLiyyn_P93s\">https://app.brancher.ai/user/UMLiyyn_P93s</a></p>\n<p><a href=\"https://js.checkio.org/user/milansport2026/\">https://js.checkio.org/user/milansport2026/</a></p>\n<p><a href=\"https://joinentre.com/profile/milansport2026\">https://joinentre.com/profile/milansport2026</a></p>\n<p><a href=\"https://onlinesequencer.net/forum/user-259526.html\">https://onlinesequencer.net/forum/user-259526.html</a></p>\n<p><a href=\"https://aetherlink.app/users/7440980648241954816\">https://aetherlink.app/users/7440980648241954816</a></p>\n<p><a href=\"https://kktix.com/user/8589242\">https://kktix.com/user/8589242</a></p>\n<p><a href=\"https://ctxt.io/2/AAD4m79nFQ\">https://ctxt.io/2/AAD4m79nFQ</a></p>\n<p><a href=\"https://mlsport2026.stck.me/profile\">https://mlsport2026.stck.me/profile</a></p>\n<p><a href=\"https://axe.rs/forum/members/mlsport2026.13420750/#about\">https://axe.rs/forum/members/mlsport2026.13420750/#about</a></p>\n<p><a href=\"https://habr.com/ru/users/milansport2026/\">https://habr.com/ru/users/milansport2026/</a></p>\n<p><a href=\"https://www.openstreetmap.org/user/milansport2026\">https://www.openstreetmap.org/user/milansport2026</a></p>\n<p><a href=\"https://www.walkscore.com/place-details/milansport2026-phnom-penh\">https://www.walkscore.com/place-details/milansport2026-phnom-penh</a></p>\n<p><a href=\"https://maps.roadtrippers.com/people/milansport2026\">https://maps.roadtrippers.com/people/milansport2026</a></p>\n<p><a href=\"https://wikifab.org/wiki/Utilisateur:Milansport2026\">https://wikifab.org/wiki/Utilisateur:Milansport2026</a></p>\n<p><a href=\"https://joy.bio/milansport2026\">https://joy.bio/milansport2026</a></p>\n<p><a href=\"https://tidal.com/@milansport2026\">https://tidal.com/@milansport2026</a></p>\n<p><a href=\"https://tupalo.com/@u8396767\">https://tupalo.com/@u8396767</a></p>\n<p><a 
href=\"https://wefunder.com/milansport\">https://wefunder.com/milansport</a></p>\n<p><a href=\"https://codeberg.org/milansport2026\">https://codeberg.org/milansport2026</a></p>\n<p><a href=\"https://ameblo.jp/milansport2026/\">https://ameblo.jp/milansport2026/</a></p>\n<p><a href=\"https://www.pearltrees.com/milansport2026\">https://www.pearltrees.com/milansport2026</a></p>\n<p><a href=\"https://odesli.co/4p3s4hprghqqc\">https://odesli.co/4p3s4hprghqqc</a></p>\n<p><a href=\"https://song.link/milansport2026\">https://song.link/milansport2026</a></p>\n<p><a href=\"https://estar.jp/users/2010300434\">https://estar.jp/users/2010300434</a></p>\n<p><a href=\"https://camp-fire.jp/profile/milansport2026\">https://camp-fire.jp/profile/milansport2026</a></p>\n<p><a href=\"https://fliphtml5.com/zh_cn/home/milansport2026\">https://fliphtml5.com/zh_cn/home/milansport2026</a></p>\n<p><a href=\"https://sites.google.com/view/milansport2026\">https://sites.google.com/view/milansport2026</a></p>\n<p><a href=\"https://ko-fi.com/milansport2026\">https://ko-fi.com/milansport2026</a></p>\n<p><a href=\"https://www.fiverr.com/milansport?public_mode=true\">https://www.fiverr.com/milansport?public_mode=true</a></p>\n<p><a href=\"https://www.speedrun.com/users/milansport2026\">https://www.speedrun.com/users/milansport2026</a></p>\n<p><a href=\"https://www.inventoridigiochi.it/membri/mlsports2026/profile/\">https://www.inventoridigiochi.it/membri/mlsports2026/profile/</a></p>\n<p><a href=\"https://padlet.com/milansport2026/padlet-8o65krsn8mkqc59b\">https://padlet.com/milansport2026/padlet-8o65krsn8mkqc59b</a></p>\n<p><a href=\"https://myanimelist.net/profile/milansport2026\">https://myanimelist.net/profile/milansport2026</a></p>\n<p><a href=\"https://naijamatta.com/milansport2026\">https://naijamatta.com/milansport2026</a></p>\n<p><a href=\"https://youtrust.jp/users/milansport2026\">https://youtrust.jp/users/milansport2026</a></p>\n<p><a 
href=\"https://milansport2026.carrd.co/\">https://milansport2026.carrd.co/</a></p>\n<p><a href=\"https://listography.com/milansport2026\">https://listography.com/milansport2026</a></p>\n<p><a href=\"https://www.deviantart.com/mlsport2026\">https://www.deviantart.com/mlsport2026</a></p>\n<p><a href=\"https://milansport2026.booth.pm/\">https://milansport2026.booth.pm</a></p>\n<p><a href=\"https://www.pearltrees.com/milansport\">https://www.pearltrees.com/milansport</a></p>\n<p><a href=\"https://coolors.co/u/milansport2026\">https://coolors.co/u/milansport2026</a></p>\n<p><a href=\"https://www.last.fm/zh/user/milansport2026\">https://www.last.fm/zh/user/milansport2026</a></p>\n<p><a href=\"https://www.pozible.com/profile/milansport2026\">https://www.pozible.com/profile/milansport2026</a></p>\n<p><a href=\"https://bestadsontv.com/profile/524234/-\">https://bestadsontv.com/profile/524234/-</a></p>\n<p><a href=\"https://www.clickasnap.com/profile/milansport2026\">https://www.clickasnap.com/profile/milansport2026</a></p>\n<p><a href=\"https://www.renderosity.com/users/milansport2026\">https://www.renderosity.com/users/milansport2026</a></p>\n<p><a href=\"https://boosty.to/milansport2026\">https://boosty.to/milansport2026</a></p>\n<p><a href=\"https://justpaste.it/u/milansport2026\">https://justpaste.it/u/milansport2026</a></p>\n<p><a href=\"https://newspicks.com/user/12286360/\">https://newspicks.com/user/12286360/</a></p>\n<p><a href=\"https://replit.com/@milansport2026\">https://replit.com/@milansport2026</a></p>\n<p><a href=\"https://glose.com/u/milansport2026\">https://glose.com/u/milansport2026</a></p>\n<p><a href=\"https://www.beatstars.com/milansport2026\">https://www.beatstars.com/milansport2026</a></p>\n<p><a href=\"https://suzuri.jp/milansport2026\">https://suzuri.jp/milansport2026</a></p>\n<p><a href=\"https://www.exchangle.com/milansport2026\">https://www.exchangle.com/milansport2026</a></p>\n<p><a 
href=\"https://audiomack.com/milansport2026\">https://audiomack.com/milansport2026</a></p>\n<p><a href=\"https://start.me/p/MbNLOP\">https://start.me/p/MbNLOP</a></p>\n<p><a href=\"https://www.designspiration.com/mlsport2026\">https://www.designspiration.com/mlsport2026</a></p>\n<p><a href=\"https://onlyfans.com/milansport2026\">https://onlyfans.com/milansport2026</a></p>\n<p><a href=\"https://www.credly.com/users/milansport2026\">https://www.credly.com/users/milansport2026</a></p>\n<p><a href=\"https://codepen.io/milansport2026\">https://codepen.io/milansport2026</a></p>\n<p><a href=\"https://projectnoah.org/users/milansport2026\">https://projectnoah.org/users/milansport2026</a></p>\n<p><a href=\"https://www.dcfever.com/users/profile.php?id=1272344\">https://www.dcfever.com/users/profile.php?id=1272344</a></p>\n<p><a href=\"https://ourairports.com/members/milansport2026/\">https://ourairports.com/members/milansport2026/</a></p>\n<p><a href=\"https://hanson.net/users/milansport2026\">https://hanson.net/users/milansport2026</a></p>\n<p><a href=\"https://www.khadas.com/profile/milansport2026/profile\">https://www.khadas.com/profile/milansport2026/profile</a></p>\n<p><a href=\"https://kitsu.app/users/milansport2026\">https://kitsu.app/users/milansport2026</a></p>\n<p><a href=\"https://connect.gt/user/milansport2026\">https://connect.gt/user/milansport2026</a></p>\n<p><a href=\"http://freestyler.ws/user/638088/milansport2026\">http://freestyler.ws/user/638088/milansport2026</a></p>\n<p><a href=\"https://www.vidlii.com/user/milansport2026\">https://www.vidlii.com/user/milansport2026</a></p>\n<p><a href=\"https://teletype.in/@mlsport2026\">https://teletype.in/@mlsport2026</a></p>\n<p><a href=\"https://www.reverbnation.com/mlsport2026\">https://www.reverbnation.com/mlsport2026</a></p>\n<p><a href=\"https://pxhere.com/zh/photographer-me/4946716\">https://pxhere.com/zh/photographer-me/4946716</a></p>\n<p><a 
href=\"https://feyenoord.supporters.nl/profiel/142535/milansport2026\">https://feyenoord.supporters.nl/profiel/142535/milansport2026</a></p>\n<p><a href=\"https://kaeuchi.jp/forums/users/milansport2026/\">https://kaeuchi.jp/forums/users/milansport2026/</a></p>\n<p><a href=\"https://coub.com/milansport2026\">https://coub.com/milansport2026</a></p>\n<p><a href=\"https://www.bandlab.com/milansport2026\">https://www.bandlab.com/milansport2026</a></p>\n<p><a href=\"https://openlibrary.org/people/milansport2026\">https://openlibrary.org/people/milansport2026</a></p>\n<p><a href=\"https://pinshape.com/users/8927160-milansport2026\">https://pinshape.com/users/8927160-milansport2026</a></p>\n<p><a href=\"https://wellfound.com/u/milansport2026\">https://wellfound.com/u/milansport2026</a></p>\n<p><a href=\"https://hashnode.com/@milansport2026\">https://hashnode.com/@milansport2026</a></p>\n<p><a href=\"https://bit.ly/m/milansport2026\">https://bit.ly/m/milansport2026</a></p>\n<p><a href=\"https://odysee.com/@milansport2026:e\">https://odysee.com/@milansport2026:e</a></p>\n<p><a href=\"https://orcid.org/0009-0008-0341-5267\">https://orcid.org/0009-0008-0341-5267</a></p>\n<p><a href=\"https://www.producthunt.com/@milansport2026\">https://www.producthunt.com/@milansport2026</a></p>\n<p><a href=\"https://archive.org/details/@milansport2026\">https://archive.org/details/@milansport2026</a></p>\n<p><a href=\"https://www.patreon.com/cw/milansport2026\">https://www.patreon.com/cw/milansport2026</a></p>\n<p><a href=\"https://disqus.com/by/milansport2026/about/\">https://disqus.com/by/milansport2026/about/</a></p>\n<p><a href=\"https://wordpress.com/reader/users/milansport2026\">https://wordpress.com/reader/users/milansport2026</a></p>\n<p><a href=\"https://www.etsy.com/hk-en/people/qe7dvblcbo5flecr\">https://www.etsy.com/hk-en/people/qe7dvblcbo5flecr</a></p>\n<p><a href=\"https://flipboard.com/@milansport2026\">https://flipboard.com/@milansport2026</a></p>\n<p><a 
href=\"https://www.magcloud.com/user/milansport2026\">https://www.magcloud.com/user/milansport2026</a></p>\n<p><a href=\"https://opencollective.com/milansport2026\">https://opencollective.com/milansport2026</a></p>\n<p><a href=\"https://www.clarinetu.com/profile/milansport2026/profile\">https://www.clarinetu.com/profile/milansport2026/profile</a></p>\n<p><a href=\"https://genius.com/milansport2026\">https://genius.com/milansport2026</a></p>\n<p><a href=\"https://noti.st/milansport2026/bio\">https://noti.st/milansport2026/bio</a></p>\n<p><a href=\"https://experiment.com/users/milansport2026\">https://experiment.com/users/milansport2026</a></p>\n<p><a href=\"https://hackaday.io/milansport2026\">https://hackaday.io/milansport2026</a></p>\n<p><a href=\"https://potofu.me/milansport2026\">https://potofu.me/milansport2026</a></p>\n<p><a href=\"https://allmylinks.com/milansport2026\">https://allmylinks.com/milansport2026</a></p>\n<p><a href=\"https://www.intensedebate.com/people/milansport2026\">https://www.intensedebate.com/people/milansport2026</a></p>\n<p><a href=\"https://gifyu.com/milansport2026\">https://gifyu.com/milansport2026</a></p>\n<p><a href=\"https://500px.com/p/milansport2026\">https://500px.com/p/milansport2026</a></p>\n<p><a href=\"https://filmfreeway.com/mlsport2026\">https://filmfreeway.com/mlsport2026</a></p>\n<p><a href=\"https://cara.app/milansport2026/about\">https://cara.app/milansport2026/about</a></p>\n<p><a href=\"https://letterboxd.com/milansport2026/\">https://letterboxd.com/milansport2026/</a></p>\n<p><a href=\"https://dlive.tv/milansport2026\">https://dlive.tv/milansport2026</a></p>\n<p><a href=\"https://www.mixcloud.com/mlsport2026\">https://www.mixcloud.com/mlsport2026</a></p>\n<p><a href=\"https://unsplash.com/@milansport2026\">https://unsplash.com/@milansport2026</a></p>\n<p><a href=\"https://www.awwwards.com/milansport2026/\">https://www.awwwards.com/milansport2026/</a></p>\n<p><a 
href=\"https://qiita.com/milansport2026\">https://qiita.com/milansport2026</a></p>\n<p><a href=\"https://www.band.us/@milansport2026\">https://www.band.us/@milansport2026</a></p>\n<p><a href=\"https://anyflip.com/homepage/lcueq#About\">https://anyflip.com/homepage/lcueq#About</a></p>\n<p><a href=\"https://wakelet.com/@milansport2026\">https://wakelet.com/@milansport2026</a></p>\n<p><a href=\"https://www.adsfare.com/milansport2026\">https://www.adsfare.com/milansport2026</a></p>\n<p><a href=\"https://luma.com/user/milansport2026\">https://luma.com/user/milansport2026</a></p>\n<p><a href=\"https://www.pixiv.net/users/124479893\">https://www.pixiv.net/users/124479893</a></p>\n<p><a href=\"https://issuu.com/milansport2026\">https://issuu.com/milansport2026</a></p>\n<p><a href=\"https://www.skillshare.com/en/user/milansport2026\">https://www.skillshare.com/en/user/milansport2026</a></p>\n<p><a href=\"https://medium.com/@milansport2026\">https://medium.com/@milansport2026</a></p>\n<p><a href=\"https://www.indiegogo.com/en/profile/milansport2026\">https://www.indiegogo.com/en/profile/milansport2026</a></p>\n<p><a href=\"https://independent.academia.edu/milansport2026\">https://independent.academia.edu/milansport2026</a></p>\n<p><a href=\"https://www.quora.com/profile/Milansport2026\">https://www.quora.com/profile/Milansport2026</a></p>\n<p><a href=\"https://mastodon.social/@milansport2026\">https://mastodon.social/@milansport2026</a></p>\n<p><a href=\"https://gitlab.com/misport2026\">https://gitlab.com/misport2026</a></p>\n<p><a href=\"https://bsky.app/profile/milansport2026.bsky.social\">https://bsky.app/profile/milansport2026.bsky.social</a></p>\n<p><a href=\"https://www.reddit.com/user/milansport2026\">https://www.reddit.com/user/milansport2026</a></p>\n<p><a href=\"https://www.tumblr.com/milansport2026\">https://www.tumblr.com/milansport2026</a></p>\n<p><a href=\"https://www.flickr.com/people/204334157@N03/\">https://www.flickr.com/people/204334157@N03/</a></p>\n<p><a 
href=\"https://github.com/milansport2026\">https://github.com/milansport2026</a></p>\n<p><a href=\"https://www.behance.net/milansport2026\">https://www.behance.net/milansport2026</a></p>\n<p><a href=\"https://www.instagram.com/milansports2026/\">https://www.instagram.com/milansports2026/</a></p>\n<p><a href=\"https://www.pinterest.com/milansport2026\">https://www.pinterest.com/milansport2026</a></p>\n<p><a href=\"https://soundcloud.com/milansport2026\">https://soundcloud.com/milansport2026</a></p>\n<p><a href=\"https://www.youtube.com/@milansport2026\">https://www.youtube.com/@milansport2026</a></p>\n<p><a href=\"https://www.twitch.tv/milansport2026\">https://www.twitch.tv/milansport2026</a></p>\n<p><a href=\"https://x.com/milansport2026\">https://x.com/milansport2026</a></p>\n<p><a href=\"https://leetcode.com/u/milansport\">https://leetcode.com/u/milansport</a></p>",
        "topics": [
            {
                "id": 4533,
                "name": "米兰体育",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1156,
                "name": "app",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166311,
            "forum_user": {
                "id": 166075,
                "user": 166311,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/221dfe96166f9c5959a35e048ec373f7?s=120&d=retro",
                "biography": "米兰体育 (https://milan-sports.com) is a comprehensive entertainment platform focused on interactive global sports events, guided by a philosophy of professionalism, stability and innovation to provide users with a high-quality online entertainment experience. The platform covers popular events in football, basketball, tennis, esports and more, synchronizing global schedules in real time with fast updates and varied features to meet different users' prediction and viewing needs, backed by a mature technical architecture and efficient operations. Beyond sports betting, 米兰体育 also integrates live-dealer entertainment, electronic games, card games and interactive lottery content into a rich, complete product matrix. It adheres to fair and transparent operating principles and uses multiple layers of technical protection to secure account information and funds, building a stable and reliable entertainment environment.\n\n#米兰体育 #米兰体育官网 #米兰体育APP #米兰体育下载 #米兰APP下载 #米兰体育网址",
                "date_modified": "2026-04-01T07:16:06.653812+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "milansport2026",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "",
        "pk": 4565,
        "published": false,
        "publish_date": "2026-04-01T07:14:53.144655+02:00"
    },
    {
        "title": "test",
        "description": "test",
        "content": "<p>test</p>",
        "topics": [],
        "user": {
            "pk": 7,
            "forum_user": {
                "id": 7,
                "user": 7,
                "first_name": "Guillaume",
                "last_name": "Pellerin",
                "avatar": "https://forum.ircam.fr/media/avatars/1eab97a.jpg",
                "avatar_url": "/media/cache/53/5e/535ebbe12d04be860b0ae9e511f8c53d.jpg",
                "biography": "Guillaume Pellerin is a researcher and developer in acoustics, audio processing, data science and web interfaces. He graduated from the French Arts et Métiers engineering school in 2000, received an M.Sc. in acoustics and signal processing in 2001 and a PhD in mechanics and nonlinear acoustics in 2006. He has been an associate professor and researcher at the Conservatoire des Arts et Métiers in Paris on topics related to sound and physics such as room acoustics, aeroacoustics, electroacoustics, electricity, signal processing and computer programming. He founded the Parisson company in 2008 to develop innovative, open-source and collaborative platforms dedicated to computational musicology and digital humanities (the Telemeta project, collectively awarded by the CNRS in 2018), a live video e-learning solution for schools and several electronic music productions. Since 2017, within the Innovation and Research Means Department of IRCAM, he has led the Web Team, driving projects related to communication, audio-visual production, music data preservation and collaborative development (Forum), and participates in research and European projects related to sciences, technologies and arts.",
                "date_modified": "2025-06-11T15:48:57.979253+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": true,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 388,
                        "forum_user": 7,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 11,
                                "membership": 388
                            },
                            {
                                "id": 18,
                                "membership": 388
                            },
                            {
                                "id": 792,
                                "membership": 388
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "pellerin",
            "first_name": "Guillaume",
            "last_name": "Pellerin",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 26,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 77,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 453,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 604,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 104,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 599,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 520,
                    "user": 7,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "test-1",
        "pk": 2561,
        "published": false,
        "publish_date": "2023-09-22T13:21:30.035945+02:00"
    },
    {
        "title": "test",
        "description": "test",
        "content": "<p>test</p>",
        "topics": [],
        "user": {
            "pk": 7,
            "forum_user": {
                "id": 7,
                "user": 7,
                "first_name": "Guillaume",
                "last_name": "Pellerin",
                "avatar": "https://forum.ircam.fr/media/avatars/1eab97a.jpg",
                "avatar_url": "/media/cache/53/5e/535ebbe12d04be860b0ae9e511f8c53d.jpg",
                "biography": "Guillaume Pellerin is a researcher and developer in acoustics, audio processing, data science and web interfaces. He graduated from the French Arts et Métiers engineering school in 2000, received an M.Sc. in acoustics and signal processing in 2001 and a PhD in mechanics and nonlinear acoustics in 2006. He has been an associate professor and researcher at the Conservatoire des Arts et Métiers in Paris on topics related to sound and physics such as room acoustics, aeroacoustics, electroacoustics, electricity, signal processing and computer programming. He founded the Parisson company in 2008 to develop innovative, open-source and collaborative platforms dedicated to computational musicology and digital humanities (the Telemeta project, collectively awarded by the CNRS in 2018), a live video e-learning solution for schools and several electronic music productions. Since 2017, within the Innovation and Research Means Department of IRCAM, he has led the Web Team, driving projects related to communication, audio-visual production, music data preservation and collaborative development (Forum), and participates in research and European projects related to sciences, technologies and arts.",
                "date_modified": "2025-06-11T15:48:57.979253+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": true,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 388,
                        "forum_user": 7,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 11,
                                "membership": 388
                            },
                            {
                                "id": 18,
                                "membership": 388
                            },
                            {
                                "id": 792,
                                "membership": 388
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "pellerin",
            "first_name": "Guillaume",
            "last_name": "Pellerin",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 26,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 77,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 453,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 604,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 604,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 104,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 599,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 7,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 520,
                    "user": 7,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "test-2",
        "pk": 2562,
        "published": false,
        "publish_date": "2023-09-22T13:24:50.318120+02:00"
    },
    {
        "title": "Quantum Fields Synthesis by Robert B. Lisek",
        "description": "Quantum Fields Synthesis represents a novel approach to sound generation using quantum physics, particularly quantum field theory, as fundamental building blocks. While classical synthesis describes sound as mechanical waves requiring a medium to propagate, QFS replaces traditional oscillators with quantum fields, each defined by a quantized wave function and its dynamics. This system produces sound where complex wave functions, probability amplitudes, energy transitions, and spinor interactions generate unique audio experiences.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p><img src=\"https://forum.ircam.fr/media/uploads/02dscf0366.jpg\" alt=\"\" width=\"859\" height=\"572\" /></p>\r\n<p></p>\r\n<p>Presented by : Robert B. Lisek</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/lisek/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p>Quantum Fields Synthesis represents a novel approach to sound generation using quantum physics, particularly quantum field theory, as fundamental building blocks. While classical synthesis describes sound as mechanical waves requiring a medium to propagate, QFS replaces traditional oscillators with quantum fields, each defined by a quantized wave function and its dynamics.</p>\r\n<p>This system produces sound where complex wave functions, probability amplitudes, energy transitions, and spinor interactions generate unique audio experiences. The dynamics of quantum states are precisely controlled via Dirac and Klein-Gordon models.</p>\r\n<p>The project employs a rigorous quantum sound synthesis framework, utilizing IBM's real quantum computing hardware and quantum neural networks to scale up the system. This enables control over the behavior of a swarm of software agents representing quantum states and their time dynamics, using wave-function measurements, energy levels, spins, and operators for superposition and entanglement.</p>\r\n<p>QFS results in new sound &nbsp;textures with unique timbral properties, spectral components, and non-classical patterns. 
The frequency spectra depend directly on the quantum states, spin dynamics define changes in sound timbre, energy jumps trigger transitions and leaps, and eigenstates modify the sound texture.</p>\r\n<p></p>\r\n<p><img src=\"https://forum.ircam.fr/media/uploads/lisek_all_square.jpg\" alt=\"\" width=\"781\" height=\"781\" /></p>\r\n<p style=\"background: rgb(235.9,237.8,255);\"><code><span>import numpy as np</span></code><br /><code><span>from qiskit import QuantumCircuit</span></code><br /><code><span>from qiskit.quantum_info import SparsePauliOp</span></code><br /><code><span>from qiskit.circuit.library import PauliEvolutionGate</span></code><br /><code><span># Build a lattice quantum field Hamiltonian and evolve it in time</span></code><br /><code><span>def simulate_field(num_qubits, dt, num_steps, m, lambda_coupling):</span></code><br /><code><span>&nbsp; &nbsp; # Define Hamiltonian terms as (Pauli label, coefficient) pairs</span></code><br /><code><span>&nbsp; &nbsp; hamiltonian_terms = []</span></code><br /><code><span>&nbsp; &nbsp; for i in range(num_qubits):</span></code><br /><code><span>&nbsp; &nbsp; &nbsp; &nbsp; z_i = \"I\" * i + \"Z\" + \"I\" * (num_qubits - i - 1)</span></code><br /><code><span>&nbsp; &nbsp; &nbsp; &nbsp; x_i = \"I\" * i + \"X\" + \"I\" * (num_qubits - i - 1)</span></code><br /><code><span>&nbsp; &nbsp; &nbsp; &nbsp; hamiltonian_terms.append((z_i, 0.5)) &nbsp;# Kinetic term</span></code><br /><code><span>&nbsp; &nbsp; &nbsp; &nbsp; if i &lt; num_qubits - 1:</span></code><br /><code><span>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; x_pair = \"I\" * i + \"XX\" + \"I\" * (num_qubits - i - 2)</span></code><br /><code><span>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; hamiltonian_terms.append((x_pair, 0.5)) &nbsp;# Potential term</span></code><br /><code><span>&nbsp; &nbsp; &nbsp; &nbsp; hamiltonian_terms.append((x_i, 0.5 * m**2)) &nbsp;# Mass term</span></code><br /><code><span>&nbsp; &nbsp; &nbsp; &nbsp; hamiltonian_terms.append((x_i, lambda_coupling)) &nbsp;# Interaction term (X**4 on one qubit reduces to the identity, so a single X is used)</span></code><br /><code><span>&nbsp; &nbsp; hamiltonian = SparsePauliOp.from_list(hamiltonian_terms)</span></code><br /><code><span>&nbsp; &nbsp; # Time evolution</span></code><br /><code><span>&nbsp; &nbsp; qc = QuantumCircuit(num_qubits)</span></code><br /><code><span>&nbsp; &nbsp; initial_state = np.zeros(2**num_qubits)</span></code><br /><code><span>&nbsp; &nbsp; initial_state[2**(num_qubits // 2)] = 1.0</span></code><br /><code><span>&nbsp; &nbsp; qc.initialize(initial_state, range(num_qubits))</span></code><br /><code><span>&nbsp; &nbsp; evolution_gate = PauliEvolutionGate(hamiltonian, time=dt)</span></code><br /><code><span>&nbsp; &nbsp; for step in range(num_steps):</span></code><br /><code><span>&nbsp; &nbsp; &nbsp; &nbsp; qc.append(evolution_gate, range(num_qubits))</span></code><br /><code><span>&nbsp; &nbsp; return qc</span></code></p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2627,
                "name": "quantum computing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2624,
                "name": "quantum field theory",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 819,
                "name": "sound synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2625,
                "name": "sound texture",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2626,
                "name": "wave function",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 21154,
            "forum_user": {
                "id": 21143,
                "user": 21154,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Lisek_portrait_rb_lisek46_2.jpg",
                "avatar_url": "/media/cache/8c/c5/8cc537299368c10d31af34af793faaf4.jpg",
                "biography": "Robert B. Lisek is an artist, mathematician and composer who focuses on systems, networks and processes (computational, biological, social). He is involved in a number of projects focused on media art, creative storytelling and interactive art. Drawing upon post-conceptual art, software art and meta-media, his work intentionally defies categorization. Lisek is a pioneer of art based on Artificial Intelligence and Machine Learning. Lisek is also a composer of contemporary music, author of many projects and scores on the intersection of spectral, stochastic, concret music, musica futurista and noise. Lisek is a founder of Fundamental Research Lab and ACCESS Art Symposium. He is the author of 300 exhibitions and concerts, among others: SIBYL - ZKM Karlsruhe; SIBYL II - IRCAM Center Pompidou; QUANTUM ENIGMA - Harvestworks Center New York and STEIM Amsterdam; TERROR ENGINES - WORM Center Rotterdam, Secure Insecurity - ISEA Istanbul; DEMONS - Venice Biennale (accompanying events); Manifesto vs. Manifesto - Ujazdowski Cartel of Contemporary Art, Warsaw; NEST - ARCO Art Fair, Madrid; Float - Lower Manhattan Cultural Council, NYC; WWAI - Siggraph, Los Angeles.",
                "date_modified": "2025-04-15T22:29:55.560395+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lisek",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3282,
                    "user": 21154,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "quantum-fields-synthesis",
        "pk": 3282,
        "published": true,
        "publish_date": "2025-02-12T16:42:26+01:00"
    },
    {
        "title": "Essay Creating Help - Be aware of What Writers Say Regarding Their Papers Items",
        "description": "Essay Creating Help - Be aware of What Writers Say Regarding Their Papers Items",
        "content": "<p>The best essay writing service is a combination of expertise, talent and expertise. This is often due to the fact basically the most effective freelance writers can offer you fantastic outcomes concerning maximizing and proofreading the features of beginner essayists. Let us have a appear at many of the sides of these types of authors who In my view delivers the best essay writing service on your own.</p>\n<p>&nbsp;The first factor to look for within the <a href=\"https://www.reddit.com/r/BestCustomEssay/comments/q15pvt/best_essay_writing_service_reddit_20212022/\">best essay writing service 2022</a> is to concentrate to the charge. A lot of the instances, youthful writers start their occupation with pupil essay developing web sites. If you're unable to pay for to pay for the month to month costs, it really is not encouraged to operate with them, nevertheless. So you can buy the costs, you really should be capable of display that your proposition is special. You will need to provide the potential to reveal evidence of operate by delivering a exam posting or some extremely equivalent run you might have presently completed. It really is moreover vital to examine the style of conversation custom they now have.</p>\n<p>This element will likely be imperative that you the quality of your composing. You must know irrespective of in the event the freelance writers are made for speaking efficiently. Essay writers needs to be ready to recognize your place without having to vacation resort to flowery words and phrases that the majority viewers are usually not in the position to stick to. As soon as you learn the best essay creating methods, make sure you function along with people that understand your requirements and anticipations.</p>\n<p>Together with the conversation layout, it's also advisable to confirm how responsive the essay making qualified companies are. Will they be well timed within their reactions? 
Is it possible to get in contact with them very easily? It's possible you'll just find yourself renewing your agreement when they are late of their responses. You should be capable to talk to your providers anytime you have inquiries or other complications. You'll find some enterprises that offer for particular variations of their guidance. It is best to try out signing up having a firm that provides personalized paper creating providers in order for you to acquire a far more personalised encounter.</p>\n<p>If you want, you are going to manage to opt for the matters, the tone, the framework as well as incorporate a customized assertion with the author. Evidently, you must not completely rely on the rankings and viewpoints supplied by men and women over the internet. You'll want to use the internet to carry out analyze on the particular essay manufacturing support. Identify on the net discussions regarding the enterprise too as their companies. Check out the weblogs and dialogue board world-wide-web web pages of various people that have employed their experienced expert services. Also it is possible to feel about tips dispersed by your educators and also your instructors.</p>\n<p>When you know some crucial details regarding the best essay writing service, it could also assist you to a whole lot. Extremely initial, you'll want to know just how the report author composes a papers. Most qualified essay freelance writers are expected to occur by having an college diploma. Not all writers are college pupils, but the majority of them are. Other than employing a instruction, they should have sizeable knowledge of scholastic making and proofreading. They should to even have a inventory portfolio that reveals their best manufacturing skills.</p>\n<p>After you are searching for an appropriate essay creating assistance evaluate, it could be greatest to investigate the encounters of scholars and writers that have presently applied their solutions in advance of. 
You could study a great deal more about their functionality in faculty periodicals and web sites. You could also think about the reviews created by faculty college students to the ordeals in utilizing such a assistance. Professional and qualified writers will normally have great responses for this kind of options. A superb essay support ought to function having a guidance support expert services that is definitely undoubtedly provided 24 several hours per day.</p>\n<p>The views offered should really be dispersed by purchasers which have previously utilized the help. This gives you with considerably more assurance you'll get the best essay writing service about. There are plenty of positive aspects so that you can decide on a best essay writing service. In order for you very affordable papers, this kind of services might help you obtain this aim. Pupils from round the earth will compose for these distinct remedies because of into the point they understand that there may be no require to pay for a lot of revenue on their own papers.</p>\n<p>On top of that, they know that making use of this providers allows help save them a lot of time. Ensure that to search for comments out of your friends so that you can choose about the most effective assistance to make use of should you be a college student and they are preparing to utilize such a assistance. In summary, you must give attention to what exactly is having reported regarding the essay freelance writers.</p>\n<p>Should they wish to thrive during the marketplace, it truly is crucial for writers being open up about interaction tradition. In case you are an element of the interaction tradition, then you definately will not likely have any difficulty together with the essays they offer in your programs. Your interaction capabilities will endure in case you are not a part of this lifestyle. 
Recall to normally get recognize in the facts which is surely currently being delivered thanks towards the actuality it can help you realize good results as well as your essay making aid.</p>",
        "topics": [],
        "user": {
            "pk": 25612,
            "forum_user": {
                "id": 25585,
                "user": 25612,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/64b328b30449e721c132b48246d32b17?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "garybowling",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "essay-creating-help-be-aware-of-what-writers-say-regarding-their-papers-items",
        "pk": 1005,
        "published": false,
        "publish_date": "2021-11-26T16:12:23.928242+01:00"
    },
    {
        "title": "test",
        "description": "test",
        "content": "<p>test</p>",
        "topics": [],
        "user": {
            "pk": 107169,
            "forum_user": {
                "id": 107035,
                "user": 107169,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/5ee3f8087b01a14de92873a1fa990b77?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-04-14T18:03:04.635493+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lejournaliste",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "test-4",
        "pk": 3359,
        "published": false,
        "publish_date": "2025-03-17T21:02:45.975585+01:00"
    },
    {
        "title": "How to Easily Connect to Orbi for Fast and Reliable Wi-Fi",
        "description": "Learn how to quickly connect to Orbi and enjoy seamless, high-speed Wi-Fi throughout your home. Follow simple steps to set up your router and satellites, ensuring optimal coverage and security.",
        "content": "<p>If you want to enjoy strong, consistent Wi-Fi at home, learning how to <a href=\"https://orbisetup.com/\"><strong>connect to Orbi</strong> </a>is essential. Orbi is a mesh Wi-Fi system that uses a main router and satellites to eliminate dead zones and deliver reliable internet throughout your home.</p>\n<p>To get started, power on your Orbi router and make sure it is properly connected to your internet service. Place the router in a central location for the best coverage. Then, position your Orbi satellites in rooms where Wi-Fi is weak. Proper placement ensures devices automatically switch between the router and satellites, giving seamless connectivity.</p>\n<p>Next, open your device&rsquo;s Wi-Fi settings, select the Orbi network, and enter the password. Once connected, devices will automatically connect to Orbi whenever they are in range. For optimal performance, keep your Orbi system updated, and make sure satellites are within range of the main router.</p>\n<p>Following these steps, you can quickly and securely <strong>connect to Orbi</strong> and enjoy fast, reliable internet on all your devices. With proper setup, Orbi delivers a smooth, uninterrupted online experience for work, streaming, gaming, and more.</p>",
        "topics": [],
        "user": {
            "pk": 166312,
            "forum_user": {
                "id": 166076,
                "user": 166312,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f0a8620edf2d4e04d72fb90d5ca72ed6?s=120&d=retro",
                "biography": "You can easily connect to orbi if you know the configuration process. You can add an orbi device to your network system using the orbi app or web. To connect using the web you have to open the setup page using the orblogin.com web address. Instead of a web address you can also use an IP address to reach the portal. To proceed with an app based method you have to download the orbi app on your client device from Google Play Store or App store as per your device operating system. Click here to grab complete step by step instructions on the setup process. You can also communicate with our technical experts if you need help.",
                "date_modified": "2026-04-01T07:39:51.553414+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "rachelwhite24",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "how-to-easily-connect-to-orbi-for-fast-and-reliable-wi-fi",
        "pk": 4566,
        "published": false,
        "publish_date": "2026-04-01T07:36:24.797325+02:00"
    },
    {
        "title": "Encounters - Cyan D'Anjou, Jeanyoon Choi, Yue Song",
        "description": "Audio installation about the contextual biases people bring into public spaces.",
        "content": "<p><span>This project employs a selection of conversations collected and sampled from across London and abroad to shape a 3D specialised experiential installation that draws on the use of surveillance technologies in action and crime solving entertainment media to create a disruption of the perfect flow of a pre-written story arc. Paralleling AI deep fake technologies through the sounds&rsquo; edited landscape, we question the point at which the ease of creating a false narrative suitable to our personal bias overtakes the essential labour of addressing the root of the systemic issues of data bias and surveillance.</span></p>\r\n<p><span>&nbsp;The viewer experiences the piece as an overlapping collection of sounds that parallels an outdoor public meeting space. As audience members move around the public installation, particular sets of voices become clearer, feeling like being let into secret plot points and clues.</span></p>\r\n<p><span>&nbsp;We have sourced conversations from locations that are purposefully public and the presence of surveillance technologies made visibly clear, but vary in time and location, such to create anonymity and variation to ensure each fragment would otherwise un-remixed feel entirely distinct. By being curated together, we as observers intrinsically fill the gaps and imagine the links in the stories they hear, a confirmation bias of their own lived experiences. At the end of the piece&rsquo;s duration, all sound mixes to form an indistinguishable m&eacute;lange of noise, distorting the narrative&ndash; rendering it nonsensical.</span></p>\r\n<p>Created by Cyan D'Anjou, Jeanyoon Choi, and Yue Song</p>",
        "topics": [
            {
                "id": 1212,
                "name": "data bias",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1213,
                "name": "public",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 38567,
            "forum_user": {
                "id": 38516,
                "user": 38567,
                "first_name": "Cyan",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/ACC_Cyan.jpg",
                "avatar_url": "/media/cache/c5/f2/c5f2f639afd0932d53b13b2ddc85ef89.jpg",
                "biography": "Cyan D’Anjou (b. 2000, Netherlands) is a speculative sculpture artist and media creator with a background in technology design and innovation ethics from Stanford University. Prior to joining RCA’s Information Experience Design program, she created tactile installations around AI’s growing presence in our everyday and the subsequent cultural and psychological changes that follow the normalisation of data capitalism. Her works have been exhibited internationally at venues including the High Museum of Art, SOMArts, Saatchi Gallery, and at Sonsbeek ‘16. Currently, her work takes on a speculative quality as she envisions the potential impacts of current societal advancements, which she often expresses in the form of multidisciplinary sculptures, videos, and installations. Cyan is particularly interested in investigating behavioural shifts as the divide between the virtual and physical worlds becomes more blurred. A central question in her work is, “how can human expression and identity be elevated in a steadily more automated world?”",
                "date_modified": "2025-02-23T22:32:05.779863+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "cyandanjou",
            "first_name": "Cyan",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "encounters-2",
        "pk": 2122,
        "published": true,
        "publish_date": "2023-03-28T09:46:15+02:00"
    },
    {
        "title": "\"Reverse Engineering 101: Buchla 700 as a Case Study\" by Kweiwen Tseng (Taiwan)",
        "description": "This talk covers the reverse engineering of the Buchla 700, by tracing its firmware, decoding documents, and rebuilding its DSP algorithms. It follows the process from a PureData prototype to a JUCE plugin and Daisy hardware implementation, showing how digital archaeology can recreate legacy instruments. The Buchla 700’s design also informs modern neural audio synthesis, such as DDSP frameworks. The session features a PureData patch demo, plugin showcase, and modular hardware presentation.",
        "content": "<p></p>\r\n<p>The Buchla 700, released in the late 1980s, represents a unique moment in electronic music history, where a digitally-aided synthesizer was combined with an ambitious user interface design.&nbsp;It featured pressure-, position-, and touch-sensitive plates, control voltage receptacles, and SMPTE synchronization for external monitors. On the software side, beyond generating and modifying its polyphonic voices, it included preset storage, a score editor, and a sequencer, enabling users to transition between studio composition and live performance.</p>\r\n<p>&nbsp;</p>\r\n<p>Additionally, it featured a highly flexible synthesis architecture with four oscillators and six indices, resulting in 12 topologies. Each routing produced different modulation behaviors, including Frequency Modulation (FM), Ring Modulation (RM), and Timbre Modulation (TM). Every oscillator and index could be controlled by an Envelope Generator, referred to in Buchla's terminology as a Function Generator. This design allowed users to program complex envelopes, even condition-based envelopes.</p>\r\n<p>&nbsp;</p>\r\n<p>This lecture explores the reverse engineering process of the Buchla 700, covering original firmware, limited video footage, performance recordings, technical manuals, documents, and other developers' projects. It traces the workflow from early prototyping in Pure Data, reconstruction of the DSP algorithm using C++ and JUCE, and hardware implementation on the Daisy platform. We demonstrate how digital archaeology can recover lost designs and recreate a fully functional modern instrument.</p>\r\n<p>&nbsp;</p>\r\n<p>In the age of neural networks, revisiting these vintage instruments is not merely nostalgic; it also offers valuable insight into differentiable sound synthesis. 
We will briefly demonstrate how the Buchla 700's architecture can inform modern neural audio models, particularly within frameworks like DDSP, where interpretable synthesis structures are essential.</p>",
        "topics": [
            {
                "id": 3521,
                "name": "buchla",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 672,
                "name": "Ddsp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3518,
                "name": "frequency modulation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3520,
                "name": "juce",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3519,
                "name": "phase modulation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2567,
                "name": "synth",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1779,
                "name": "Synthesizer",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 129844,
            "forum_user": {
                "id": 129671,
                "user": 129844,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/5be5e8be6710d4d64f8f9f5d46c945ee?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-10-09T08:16:53.251177+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "kweiwen",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "reverse-engineering-101-buchla-700-as-a-case-study-by-kweiwen-tseng-taiwan",
        "pk": 3782,
        "published": true,
        "publish_date": "2025-10-08T17:09:58+02:00"
    },
    {
        "title": "\"3SNtv: Live Spatial Audio Broadcasting\" by Randall Packer",
        "description": "3SNtv is Zakros InterArts’ project for a 24/7 “artist‑television” channel built for live, interactive spatial audio and full resolution video. At the IRCAM Forum Workshops 2026, Randall Packer will present the vision and conceptual prototype of 3SNtv, alongside an end‑to‑end, standards‑based workflow—from SPAT Revolution ambisonic staging to MPEG‑H object authoring, 4K/immersive encoding, and OTT playback on consumer home‑theater systems. The session is intended as an invitation to composers, engineers, and acousticians to partner with Zakros and contribute repertoire, host research nodes, and refine shared practices for immersive broadcasting.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><img src=\"https://thirdspacenetwork.com/wp-content/uploads/2025/10/StudioPhotos-2253-1024x683.jpg\" /></p>\r\n<h6 style=\"text-align: center;\"><em><strong>Zakros InterArts Underground Studio Bunker in Washington, DC&nbsp;</strong></em></h6>\r\n<h5><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></h5>\r\n<p><em><strong></strong></em></p>\r\n<p><strong>3SNtv (Third Space Network Television)</strong> is&nbsp;a research project of&nbsp;<a href=\"https://www.zakros.com\" title=\"Zakros InterArts\">Zakros InterArts</a><strong>,</strong> conceived as a 24/7 artist‑television channel for live, spatial media. Until now, the performance of 3D audio has resided primarily in research labs, production studios, and specialized venues. We see the future as the online dissemination of immersive broadcasting from the studio to the home system: a seamless pipeline that delivers the performance of spatially composed networked music and video art to consumer playback systems. The project is built on Zakros&rsquo; <strong>Telematic Theater</strong> including SPAT Revolution&mdash;a Max‑based virtual mise-en-sc&egrave;ne for 3D audio-visual online performance&mdash;designed to deliver object‑based immersive audio with remote performers and synchronized visuals to multichannel sound systems, head‑mounted displays, or binaural headphones. 
The Telematic Theater is currently being developed in collaboration with Th&eacute;ophile Clet, Federico Foderaro, Mathew Ostrowski, and Julien Bayle.&nbsp;</p>\r\n<p><strong>3SNtv</strong> is a <a href=\"https://www.zakros.com\" title=\"Zakros InterArts\">Zakros InterArts</a> research initiative that treats broadcast as a compositional space: a channel where telematic performances, artist interviews, and research demonstrations circulate as one continuous, always-on stream. The aim is to move immersive audio beyond specialist contexts by delivering spatially composed, networked music from the studio to the home system&mdash;while preserving both the creative intent of the mix and the experience of liveness. Rather than relying on proprietary ecosystems, 3SNtv is being built around open, widely deployed standards&mdash;centered on MPEG‑H 3D Audio&mdash;and interoperable workflows that can be adopted and extended by artists and research partners.</p>\r\n<p>At the IRCAM Forum Workshops 2026, we will present the project&rsquo;s conceptual prototype and the validated end‑to‑end pipeline that underpins it. In the studio, SPAT Revolution renders third-order ambisonic scenes and object‑based staging; these mixes are then authored for interactivity using Fraunhofer&rsquo;s MPEG‑H Authoring Suite, where audio objects can be defined and streamed as metadata. Programs are encoded as 4K video with immersive audio and packaged for adaptive OTT (Over the Top) delivery (DASH/CMAF) to home theaters, including seamless switching between scheduled live events and archived programming. On the viewer side, a dedicated 3SNtv application for Google TV/Android TV supports MPEG‑H passthrough to consumer televisions and AVRs (receivers), with practical fallbacks (PCM, binaural, or stereo) where end‑to‑end support is not available. 
Low‑latency operation targets a few seconds of delay&mdash;small enough to support real‑time cues and companion social engagement layers without compromising stability.</p>\r\n<p>3SNtv is conceived as social broadcasting: a federation of networked nodes&mdash;composers' and artists' studios, universities, research labs, and arts organizations&mdash;that can host research, contribute repertoire, and co‑develop standards‑aware practices for truly live, interactive spatial media delivered to global audiences. Research for 3SNtv is a result of the 2025 3D Audio Dialogues series led by Randall Packer and spatial audio pioneer Jean‑Marc Jot, with contributions from leading scientists and practitioners of 3D audio, including Agnieszka Roginska, Paul Geluso, Thibaut Carpentier, Markus Noisternig, Olivier Warusfel, Ceri Thomas, Jani Huoponen, Dafna Naphtal, and Georg Hajdu. We would like to express our deepest thanks to the many artists, scientists, and engineers whose participation continues to shape both the Telematic Theater and 3SNtv.</p>\r\n<p><a href=\"https://www.zakros.com\" title=\"Zakros InterArts\">Zakros InterArts</a><br /><a href=\"https://www.thirdspacenetwork.com\" title=\"Third Space Network\">Third Space Network</a></p>",
        "topics": [
            {
                "id": 3985,
                "name": "artist-television",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 849,
                "name": "interactive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3986,
                "name": "live streaming",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3987,
                "name": "MPEG-H",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3984,
                "name": "networked",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 2151,
            "forum_user": {
                "id": 2149,
                "user": 2151,
                "first_name": "RANDALL",
                "last_name": "Packer",
                "avatar": "https://forum.ircam.fr/media/avatars/Randall_Packer-Headshot.jpeg",
                "avatar_url": "/media/cache/31/42/3142121f9bc166a4c1ffc9111730ff69.jpg",
                "biography": "Randall Packer is a media artist, composer, writer, and educator working at the intersection of networked performance and immersive sound. He is Artistic Director of Zakros InterArts, a fully online alternative arts organization based in Washington DC, and has overseen the creation of the Telematic Theater, a networked platform for the creation of online performance and experimental broadcast forms that connect studios, performers, and audiences across distance in real time. \n\nPacker’s work bridges music composition, technology, media theory, and dramaturgy. He holds a Ph.D. in Music Composition from the University of California, Berkeley, an M.F.A. in Music Composition from the California Institute of the Arts, and a Certificate in Computer Music from IRCAM/Centre Pompidou. \n\nPacker’s practice, for more than 30 years, has advanced a coherent throughline: to couple aesthetic inquiry with technical rigor in order to deliver immersive, participatory performance beyond the confines of the traditional theater venue. He brings a collaborative practice aimed at networking, leading to a shared, open toolkit for collaborating artists and engineers working in immersive multimedia practices.",
                "date_modified": "2026-03-14T17:24:31.358159+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 488,
                        "forum_user": 2149,
                        "date_start": "2023-10-09",
                        "date_end": "2026-10-26",
                        "type": 0,
                        "keys": [
                            {
                                "id": 54,
                                "membership": 488
                            },
                            {
                                "id": 233,
                                "membership": 488
                            },
                            {
                                "id": 816,
                                "membership": 488
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "rpacker",
            "first_name": "RANDALL",
            "last_name": "Packer",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 38,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 387,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 599,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 394,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 98,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 38,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 645,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 492,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 487,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 613,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 111,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 21,
                    "emitter_object_id": 277,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2516,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 117,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3218,
                    "user": 2151,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "3sntv-live-spatial-audio-broadcasting",
        "pk": 4141,
        "published": true,
        "publish_date": "2026-01-05T21:45:53+01:00"
    },
    {
        "title": "vssl – Performing with Sound as a Vessel by Xavier Bonfill (open call art music denmark)",
        "description": "vssl is a performative sampler and granular synthesizer that enables real-time sound exploration through audio descriptor-based navigation. In this presentation, I will introduce my artistic approach to vssl as part of my (vessels) series, discussing its conceptual and technical foundations. The session will conclude with a live demonstration, showcasing vssl’s performative and exploratory capabilities.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><span>Art Music Denmark and Ircam are offering a Danish composer or musician the opportunity to take part in the &ldquo;Ircam Forum&rdquo; workshop at the French music institution Ircam from March 26 to 28, 2025.</span></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/_sdi1692.jpg\" alt=\"\" width=\"699\" height=\"466\" /></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">Presented by Xavier Bonfill</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/xbv/\" target=\"_blank\">Biography</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>vssl </span><span>is a performative sampler and granular synthesizer that enables real-time sound exploration through audio descriptor-based navigation. Instead of traditional sample playback methods, vssl allows performers to dynamically search and manipulate sound libraries based on perceptual parameters like brightness, hardness, harmonicity, and height. </span></p>\r\n<p><span>As a composer and artist, not strictly a developer or researcher, I approach this project as an intricate part of my creative process, focusing on the interplay between sound, technology, and performance. 
In this presentation, I will introduce my work and artistic background before diving into the conceptual and technical foundations of the instrument and its role in my ongoing series of works &ldquo;(vessels)&rdquo;</span><span>, </span><span>concluding with a live demonstration that showcases its performative and exploratory capabilities. </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"c-content__button\"></div>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 97403,
            "forum_user": {
                "id": 97281,
                "user": 97403,
                "first_name": "Xavier",
                "last_name": "Bonfill",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/b70819e0bf1b26cb25f4a06d519af838?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-03-13T11:31:14.306764+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "xbv",
            "first_name": "Xavier",
            "last_name": "Bonfill",
            "bookmarks": []
        },
        "slug": "vssl-performing-with-sound-as-a-vessel-by-xavier-bonfill",
        "pk": 3317,
        "published": true,
        "publish_date": "2025-05-03T11:02:53+02:00"
    },
    {
        "title": "Chaotic Assemblages: Sonifying Strange Attractors by Jan Ove Hennig (Germany)",
        "description": "This project investigates consistency versus significance as modes of organization through Strange Attractors - mathematical systems that resist intellectual comprehension but reveal embodied understanding through sonification. It further investigates how sound can reveal proximity to critical thresholds where infinitesimal parameter changes trigger system collapse or explosion, making audible the fragile metastable conditions necessary for complex behavior. The project explores 'collaborative ignorance' - how humans interact productively with mathematical complexity they cannot comprehend. Technical implementation uses Max/MSP with custom strange attractor external and SPAT library for real-time multi-channel spatial dispersion.",
        "content": "<p></p>\r\n<h2>PROJECT CONCEPT</h2>\r\n<p>Audio can reveal the strange attractor mathematical dynamics invisible to other sensory modalities, providing new pathways for understanding complex behavior through sonic pattern recognition. In this sense, sonification functions as consistency probe - exposing the invisible tensions that maintain system stability and detecting proximity to critical breakdown points.</p>\r\n<p>The system enables direct human interaction with mathematical complexity through collaborative ignorance: productive aesthetic collaboration between human intuition and mathematical consistency, even when rational understanding is impossible.</p>\r\n<p>By positioning mathematical elements in three-dimensional sound space, spatial audio enhances the&nbsp; revelatory capacity of sonification and provides additional perceptual dimensions for understanding complex system behavior.</p>\r\n<p>&nbsp;</p>\r\n<h2>THEORETICAL FRAMEWORK</h2>\r\n<h3>1. Consistency vs. Signifiance</h3>\r\n<p><em>\"Consistency concerns precisely the holding together of heterogeneous elements\"</em> (Deleuze &amp; Guattari, <em>A Thousand Plateaus</em>, 1980, p. 329).</p>\r\n<p>By building on the distinction between two fundamental modes of organization, strange attractors demonstrate the principle of <strong>consistency</strong> - coherent behavior among heterogeneous elements without central meaning or control - as opposed to <strong>signifiance</strong>, which organizes elements around predetermined meanings, symbols, or hierarchical structures.</p>\r\n<p>The heterogeneous components in this system include mathematical equations, particle trajectories, visual patterns, sonic parameters, and human aesthetic response, yet no central signifier imposes predetermined meaning, purpose, or symbolic content on their organization. 
Instead, these elements maintain productive relationships through interaction rather than imposed structure, achieving operational coherence without semantic unity. The project explores how this consistency emerges and breaks down at critical parameter thresholds, making audible the \"work\" required to maintain assemblage coherence.</p>\r\n<p>&nbsp;</p>\r\n<h3>2. Transcendent Control vs. Immanent Organization</h3>\r\n<p>The system operates through the tension between transcendent control - external command imposing goals and meanings as found in traditional musical instruments - and immanent organization, where rules emerge from within system interactions as demonstrated by strange attractor dynamics. Human parameter adjustment introduces transcendent agency through conscious intentions and aesthetic judgments into the immanent mathematical organization, where differential equation rules produce emergent patterns without external direction.</p>\r\n<p>This creates a fundamental question: can transcendent human agency become immanent to the larger human-mathematical assemblage without destroying the mathematical consistency that gives the system its capacity for complex behavior?</p>\r\n<p>&nbsp;</p>\r\n<h3>3. Edge of Chaos Dynamics and Metastability</h3>\r\n<p>Strange attractors exist only within infinitesimally narrow parameter ranges - most values cause system collapse or explosion. This reveals their nature as metastable equilibria that maintain coherence through constant movement rather than static stability. The system operates at critical thresholds where small parameter changes can trigger qualitative transformation, demonstrating sensitive dependence where tiny adjustments push the system between radically different behavioral regimes.</p>\r\n<p>The project focuses on how sonification reveals proximity to these critical thresholds, making audible the fragile conditions necessary for complex behavior. 
Through sound, we can detect the invisible tensions that maintain system stability and anticipate approaching breakdown points before they become visually apparent.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"Attractor 2\" src=\"https://forum.ircam.fr/media/uploads/user/a73bca87e65dcc14f4b4ecf1ce5a7800.png\" /></p>\r\n<p><img alt=\"Attractor 1\" src=\"https://forum.ircam.fr/media/uploads/user/744434798c95db640e6674498ff64556.png\" /></p>",
        "topics": [
            {
                "id": 3454,
                "name": "Collaborative Ignorance",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 276,
                "name": "Spat 5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3453,
                "name": "Strange Attractor",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 59124,
            "forum_user": {
                "id": 59059,
                "user": 59124,
                "first_name": "Jan Ove",
                "last_name": "Hennig",
                "avatar": "https://forum.ircam.fr/media/avatars/Kabuki_Portrait_-_Processed.jpg",
                "avatar_url": "/media/cache/d0/7f/d07f990b002b5d863a5794680b842936.jpg",
                "biography": "I'm a sound artist and music producer based in Frankfurt, Germany with a passion for sharing knowledge. I've worked as lecturer at the Abbey Road Institute in Frankfurt (with focus on Max/MSP and sound synthesis) and developed video series for Softube (Modular Sound Explorations) and Korg (Sequencing Strategies) among others. In addition to releasing music and performing live with my modular synthesizer I'm also exhibiting large-format audio installations based around my interests in 3d printing, microcontrollers and their interactions with sensors and physical objects.",
                "date_modified": "2025-12-08T20:39:01.777661+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 965,
                        "forum_user": 59059,
                        "date_start": "2024-10-17",
                        "date_end": "2025-10-17",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "kabuki",
            "first_name": "Jan Ove",
            "last_name": "Hennig",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2759,
                    "user": 59124,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "chaotic-assemblages",
        "pk": 3741,
        "published": true,
        "publish_date": "2025-10-03T10:40:28+02:00"
    },
    {
        "title": "Modalys 3.8.1 released",
        "description": "Modalys new version 3.8.1 is available, in sync with Forum 2023.",
        "content": "<p>Modalys, Ircam's virtual lutherie technology, was introduced as early as 1989 and has always been enhanced and maintained since then!</p>\r\n<p><a href=\"https://forum.ircam.fr/projects/detail/modalys/\">Modalys 3.8.1</a> is a significant maintenance update that brings a lot of bug fixes and improvements, especially to the (relatively) new Lua engine.</p>\r\n<p>The <a href=\"https://support.ircam.fr/docs/Modalys/current/Controllers/controller_lua.html\">mlys.lua</a> controller, introduced in Modalys 3.7, carries out the idea of a new scriptural approach within the graphic and real time environment of Max (making it the equivalent of non real time Lisp), allowing a bigger control over the details than with the traditional \"mlys\" graphical approach. This new approach is particularly suitable for 3D sounding objects (finite elements).</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5ebbf4fa755ff84c80f4230d8b67afce.png\" /></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 194,
                "name": "3d",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 820,
                "name": "finite elements",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1256,
                "name": "modal",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 214,
                "name": "Physical Modeling Engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 818,
                "name": "physical models",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 129,
                "name": "Real time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 59,
                "name": "Synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17617,
            "forum_user": {
                "id": 17613,
                "user": 17617,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/65285f24050c7dbd54422824b1a7c7cb?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-08-31T13:33:58.886455+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 737,
                        "forum_user": 17613,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "robert_p",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "modalys-381-released",
        "pk": 2167,
        "published": true,
        "publish_date": "2023-03-29T17:02:12+02:00"
    },
    {
        "title": "Tutoriel Modalys n°3 : The Confused Flatulence Tube",
        "description": "Troisième partie de ma série de tutoriels sur l'utilisation de Modalys et de ses bibliothèques dans Modalisp, OpenMusic et Max.",
        "content": "<p style=\"text-align: justify;\"><strong>Dans ce tutoriel, nous construisons un simple tube qui est activ&eacute; par un objet mono-deux masses par le biais d'une connexion &agrave; anche.</strong></p>\r\n<p style=\"text-align: justify;\">Comme toujours, je commence par Modalisp, puis je passe &agrave; OpenMusic et je termine avec Max. Il y a des signets dans la description vid&eacute;o sur YouTube, mais la plupart des explications se font dans la partie Modalisp.</p>\r\n<p style=\"text-align: justify;\">La connexion &agrave; anche est pour le moins d&eacute;licate. Et la documentation vous r&eacute;serve quelques pi&egrave;ges :-).... De plus, il semble se comporter diff&eacute;remment dans Max (par rapport &agrave; Modalisp). Cela peut &ecirc;tre un peu frustrant et d&eacute;routant lorsqu'on essaie de cr&eacute;er quelque chose de pr&eacute;cis, mais d'un autre c&ocirc;t&eacute;, la non-lin&eacute;arit&eacute; dans cette connexion a beaucoup de potentiel pour pouvoir produire un son tr&egrave;s vivant.</p>\r\n<h6></h6>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/VVX2FU1OxZQ\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h6 style=\"text-align: center;\"></h6>\r\n<p style=\"text-align: left;\"><strong>Ce tutoriel a &eacute;t&eacute; r&eacute;alis&eacute; par Olav Lervik.</strong></p>",
        "topics": [
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 133,
                "name": "Sound synthesis and treatment",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n3-the-confused-flatulence-tube",
        "pk": 725,
        "published": true,
        "publish_date": "2020-08-18T12:10:53+02:00"
    },
    {
        "title": "Workshop Binaural Externalization Processing - Jean-Marc JOT",
        "description": "Presentation during the Ircam Forum Workshop 2023 In Paris",
        "content": "<p>In both entertainment and professional applications, conventionally produced stereo or multi-channel audio content is frequently delivered over headphones or earbuds. Use cases involving object-based binaural audio rendering include recently developed immersive multi-channel audio distribution formats, along with the accelerating deployment of virtual or augmented reality applications and head-mounted displays. The appreciation of these listening experiences by end users may be compromised by an unnatural perception of the localization of frontal audio objects: commonly heard near or inside the listener&rsquo;s head even when their specified position is distant. In this demonstration, examples are presented to illustrate the differences between audio rendered with traditional stereo panning, binaural processing, and a recently proposed externalization processing method.</p>",
        "topics": [],
        "user": {
            "pk": 20758,
            "forum_user": {
                "id": 20749,
                "user": 20758,
                "first_name": "Jean-Marc",
                "last_name": "Jot",
                "avatar": "https://forum.ircam.fr/media/avatars/jmj_2023b_whitebg.png",
                "avatar_url": "/media/cache/43/5c/435c8591db0f56f21cc34332821b283a.jpg",
                "biography": "Globally recognized audio technology innovator in consumer electronics and pro markets, currently focusing more particularly on immersive audio, hearing personalization and music technology innovation.  I founded Virtuel Works to help accelerate the development and deployment of audio, voice and music computing technologies that will power immersive experiences.  Previously, I initiated and drove the development of novel sound processing technologies, platforms and standards for virtual and augmented reality, gaming, broadcast, cinema, and music creation - with Magic Leap, Creative Labs, DTS / Xperi, and iZotope / Native Instruments.  Before relocating to California in the late 90s, I conducted research at IRCAM in Paris, where I created the Spat software library for immersive music creation and performance.  Fellow of the Audio Engineering Society, regular speaker in industry and academic events.  Authored numerous publications and patents on digital audio signal processing.",
                "date_modified": "2025-04-16T18:29:13.648099+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jmjot",
            "first_name": "Jean-Marc",
            "last_name": "Jot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3392,
                    "user": 20758,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "binaural-externalization-processing",
        "pk": 2061,
        "published": true,
        "publish_date": "2023-02-14T17:11:01+01:00"
    },
    {
        "title": "OVERTON 3D audio synthesizer by Martin Antiphon",
        "description": "Overton is a 3D audio synthesizer inspired by classic synthesizers. Its founding intention is to make three-dimensional sound exploration accessible without requiring the learning of new musical gestures. Although compatible with advanced control modes, Overton was designed to enable the writing of 3D space based on existing instrumental practices, making spatialization a natural extension of the language of classical synthesizers.",
        "content": "<div>\r\n<p><strong><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></p>\r\n<p><strong>Overton</strong>&nbsp;is a three-dimensional audio synthesizer inspired by classic sound synthesis architectures. It is primarily based on additive and substractive synthesis principles, a set of traditional modulations, all integrated into an advanced spatialization engine.</p>\r\n</div>\r\n<p>Overton's central feature uses <strong>Decorrelated Spatial Synthesis</strong>. Each polyphonic voice consists of several independent voices (called instances to avoid confusion with polyphonic voices), each incorporating a complete signal path including an oscillator, filter, amplifier, and a coordinate generator. A decorrelation engine acts on these instances using high-level controls, taking into account their position in space. This architecture makes it possible to generate not just point sources of sound, but textures occupying extended areas of three-dimensional space.</p>\r\n<p>The use of modulations and controls familiar to traditional synthesizers allows the sound space to be sculpted intuitively. Low-frequency generators, themselves composed of several decorrelated instances, can simultaneously control synthesis parameters&mdash;such as amplitude or frequency modulation&mdash;and spatial parameters. This results in a close relationship between the characteristics of sounds and their placement in space. 
When these modulations are applied to different instances of a single polyphony voice, they produce unique soundscapes: rhythmic patterns with long wavelengths, and denser, more textured structures at higher frequencies.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/13f503d7876aae68c8bb9bcaf0f80d4c.png\" /></p>\r\n<p>Overton also includes a trajectory generator, which can be controlled either manually, by modulation sources, or by an envelope generator. These trajectories can be assigned individually to each polyphonic voice. Combined with classic functions such as keyboard split, this feature allows certain note ranges to be applied in specific spatial functions.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/57030a0d74b3d013df15168ffd284783.png\" /></p>\r\n<p>The synthesizer also offers standard features such as preset management and export, MIDI mapping, and full parameter control via OSC. Additional control devices&mdash;expression wheel and pedal, as well as support for certain MPE features&mdash;offer extended interaction possibilities.</p>\r\n<p>Overton was developed as a standalone Max application. Rendering is performed whith SPAT and natively offers binaural, 7.1.4 and 4th order ambisonic outputs. It is currently being rewritten to be released as a VST plug-in.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/ad20d5ffbf0076b11f457fd5b3fd3cf7.png\" /></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 275,
                "name": "Max apps",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1894,
                "name": "MusicUnit",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 276,
                "name": "Spat 5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1779,
                "name": "Synthesizer",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1021,
            "forum_user": {
                "id": 1021,
                "user": 1021,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/PortraitMU_Martin_Antiphon.jpg",
                "avatar_url": "/media/cache/32/34/3234bcf828a4be0f8a1b4026963834e4.jpg",
                "biography": "Sound engineer, 3D audio designer, producer and composer, Martin Antiphon is leaving his position as sound manager at IRCAM in 2010 to join the Music Unit team. He already has numerous studio collaborations to his credit with Ibrahim Maalouf, Balake Sissoko, Rone or Vanessa Wagner, as well as concerts throughout Europe as a live electronic performer for Kaija Saariaho, Sivan Eldar and Sebastian Rivas. On the strength of his mastery of traditional mixing techniques and spatial audio technologies, Martin is now working on converging his skills in the field of immersive audio. He is currently CTO of Music Unit, within wich he has developed a patented 3D audio synthesiser. However Martin continues to create and recently inaugurated his first sound installation, Lo Parlament, in his home town of Pau.\nSince 2022, Martin is vice-president of the French section of the AES.",
                "date_modified": "2026-02-25T17:51:20.352692+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": true,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 486,
                        "forum_user": 1021,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-25",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "MartinAntiphon",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "overton-3d-audio-synthesiser-by-martin-antiphon",
        "pk": 4350,
        "published": true,
        "publish_date": "2026-02-11T18:37:34+01:00"
    },
    {
        "title": "Le corps du corpus. Embodied Interaction From Machine-Learning In Human-Machine Improvisation (Interaction incarnée à partir de l'apprentissage automatique dans l'improvisation homme-machine).",
        "description": "L'article de Pierre Saint-Germier, Clément Canonne et Marco Fiorini a été accepté pour la conférence AIMC2024 (The International Conference on AI and Musical Creativity, 9-11 septembre 2024, Oxford, UK).",
        "content": "<p><strong>Pierre Saint-Germier</strong> (IRCAM STMS, &eacute;quipe APM) et <strong>Marco Fiorini</strong> (IRCAM STMS, &eacute;quipe RepMus) participeront &agrave; la <a href=\"https://aimc2024.pubpub.org\">conf&eacute;rence internationale sur l'IA et la cr&eacute;ativit&eacute; musicale</a> <strong>AIMC24</strong> &agrave; l'<strong>Universit&eacute; d'Oxford</strong>, au Royaume-Uni (du 9 au 11 septembre 2024), pour pr&eacute;senter l'article de recherche &laquo; <strong>The Corpus' Body. Embodied Interaction From Machine-Learning In Human-Machine Improvisation</strong> &raquo; (par Pierre Saint-Germier, Cl&eacute;ment Canonne, et Marco Fiorini).</p>\r\n<p>Cette &eacute;tude, qui fait partie du projet <a href=\"https://reach.ircam.fr\">ERC REACH</a> dirig&eacute; &agrave;&nbsp;l'Ircam par G&eacute;rard Assayag, consiste en une recherche conceptuelle et exp&eacute;rimentale sur la question de la (d&eacute;s)incarnation dans les logiciels de co-improvisation homme-machine bas&eacute;s sur l'apprentissage automatique, en se concentrant sp&eacute;cifiquement sur l'application de co-cr&eacute;ation <a href=\"https://forum.ircam.fr/projects/detail/somax-2/\">Somax2</a>, d&eacute;velopp&eacute;e dans le cadre du m&ecirc;me projet REACH.</p>\r\n<p>Lire l'article complet <a href=\"https://aimc2024.pubpub.org/pub/jylagyzp/release/1\">ici</a>.</p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2186,
                "name": "AIMC",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2185,
                "name": "APM",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 748,
                "name": "co-creativity",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 267,
                "name": "Corpus",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1287,
                "name": "REACH",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2184,
                "name": "RepMus",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32267,
            "forum_user": {
                "id": 32219,
                "user": 32267,
                "first_name": "Marco",
                "last_name": "Fiorini",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-01-16_at_10.39.51.jpeg",
                "avatar_url": "/media/cache/e7/ed/e7ed5b0d44a066e65e188a351b8c9bb8.jpg",
                "biography": "Marco Fiorini is an Italian musician and researcher specializing in human-machine interaction in musical improvisation. \nHe is part of the Music Representation team at IRCAM in Paris, working on the ERC REACH project with a focus on Somax2. \nAs a PhD candidate at Sorbonne Université he develops co-creative instruments that foster real-time interaction between musicians and artificial agents.\nHe has collaborated with artists such as Jöelle Léandre, George Lewis, Steve Lehman, and Horse Lords. His work as guitarist, electronic musican and computer music designer have been featured at major international venues and festivals including Carnegie Hall (New York), ManiFeste (Centre Georges Pompidou, Paris), Improtech Paris-Tokyo (Tokyo University of the Arts), Klang (Royal Danish Academy of Music, Copenhagen), Mixtur (ESMUC, Barcelona).\nIn 2024, he is an invited lecturer at the Max Summer School at Tokyo Geidai University of the Arts, and in 2025 he will lead a Somax2 workshop at Berklee College of Music for the 50th anniversary of the International Computer Music Conference in Boston.\nHe holds degrees in Jazz Guitar, Electronic Music, Sound and Music Computing and Software Engineering.",
                "date_modified": "2026-02-25T18:50:33.457396+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 407,
                        "forum_user": 32219,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "fiorini",
            "first_name": "Marco",
            "last_name": "Fiorini",
            "bookmarks": []
        },
        "slug": "the-corpus-body-embodied-interaction-from-machine-learning-in-human-machine-improvisation",
        "pk": 2980,
        "published": true,
        "publish_date": "2024-09-05T15:55:44+02:00"
    },
    {
        "title": "Max a 30 ans",
        "description": "Le logiciel Max est un environnement visuel destiné aux artistes, aux musiciens, aux designers sonores, aux enseignants et aux chercheurs.",
        "content": "<p>Le logiciel Max est un environnement visuel destin&eacute; aux artistes, aux musiciens, aux designers sonores, aux enseignants et&nbsp;aux chercheurs.</p>\r\n<p>Avec Max/Msp on cr&eacute;e des instruments, des effets, des interfaces personnalis&eacute;es. Ce langage est autant utilis&eacute; pour les performances live que pour la composition ou les arts num&eacute;riques.Ce qui a fait de Max un outil avantgardiste est qu'il part d'un principe assez singulier : un espace de travail similaire &agrave; une page blanche. &nbsp;On y cr&eacute;e des boites qui un r&ocirc;le sp&eacute;cifique: les entr&eacute;es (inlets) et sorties (outlets). Et puis on laisse libre cours &agrave; ces d&eacute;sirs. On&nbsp;glisse, on connecte entre-elles et on&nbsp;g&eacute;n&eacute;re, recoit ou diffuse des informations et des valeurs, constituant ainsi des outils de traitement de diff&eacute;rentes natures (signaux midis, audio, videos, etc).</p>\r\n<p><img src=\"/media/uploads/capture_d&rsquo;&eacute;cran_2019-05-09_&agrave;_16.07.05.png\" width=\"1369\" height=\"908\" /></p>\r\n<p>Il s'agira &agrave; partir de Max de se frotter &agrave; la cr&eacute;ation sonore et &agrave; la programmation&nbsp;sans&nbsp;la connaissance pr&eacute;alable de langages de programmation. Cet outil permet donc &agrave; priori un acc&egrave;s universel et on s'y inscrit selon son niveau et ses besoins. 
Certains y trouveront la possibilit&eacute;&nbsp;de r&eacute;pondre ad hoc &agrave; leurs besoins artistiques, d'autres seront entrain&eacute;s dans&nbsp;le go&ucirc;t de la programmation et cr&eacute;erons leurs propres applications qui seront ensuite affranchies de l'environnement de programmation lui m&ecirc;me.</p>\r\n<p>Selon le premier manuel d'il y a 30 ans Max a &eacute;t&eacute; con&ccedil;u pour ceux qui veulent frapper les limites des programmes habituels de sonorisation pour &eacute;quipement MIDI.</p>\r\n<p>Lors des <a href=\"/agenda/forum-ircam-workshops-march-26-29-2019/detail/\">Ateliers du Forum de mars 2019,</a>&nbsp;nous avons invit&eacute; David Zicarelli de Cycling `74 &agrave; retracer un historique de Max.&nbsp;Voir la vid&eacute;o int&eacute;grale de la conf&eacute;rence.</p>\r\n<p><video width=\"100%\" height=\"150\" poster=\"//medias.ircam.fr/media/cache/a4/c0/a4c082dc4c9ebe7fa3536226a8b18e24.jpg\" controls=\"controls\" data-title=\"David Zicarelli. Max History\" id=\"player\">\r\n                \r\n                  \r\n        \t\t    <source src=\"//medias.ircam.fr/stream/int/video/files/2019/04/23/DavidZicarelli_29mars2019.mov.webm\" type=\"video/webm\" />\r\n        \t\t    <source src=\"//medias.ircam.fr/stream/int/video/files/2019/04/23/DavidZicarelli_29mars2019.mov.mp4\" type=\"video/mp4\" />\r\n                  \r\n                \r\n    \t\t</video></p>\r\n<p>Nous avons profit&eacute; de sa venue aussi pour lui poser quelques questions directement.</p>\r\n<p><iframe width=\"560\" height=\"315\" style=\"height: 315px;\" src=\"https://www.youtube.com/embed/UopLiVwhTJ4\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen=\"allowfullscreen\"></iframe>&nbsp;</p>\r\n<p><iframe width=\"560\" height=\"315\" style=\"height: 315px;\" src=\"https://www.youtube.com/embed/L5xWI_Flmgs\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; 
picture-in-picture\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p>L'ann&eacute;e 2019 est donc une ann&eacute;e importante pour nos amis de Cycling'74.&nbsp;</p>\r\n<p><span>Apr&egrave;s l'Ircam, l'anniversaire de Max s'est poursuivi aux Etats-Unis du 26 au 28 avril 2019 lors de l&rsquo;Expo Cycling '74 au Mass MoCA (mus&eacute;e d'art contemporain du Massachusetts) &agrave; North Adams. Ces trois jours ont &eacute;t&eacute; l'occasion de r&eacute;unir une communaut&eacute; de \"Maxers\" autour de pr&eacute;sentations d'artistes et designers sonores, de performances live, et d'ateliers pratiques portant sur certaines nouvelles fonctionnalit&eacute;s de Max. La \"Science Fair\" a &eacute;galement permis &agrave; tous les participants de pr&eacute;senter leurs projets dans une ambiance joviale et DIY, afin de c&eacute;l&eacute;brer aussi bien le pass&eacute; de Max &nbsp;&mdash; Cort Lippe pr&eacute;sentait une station ISPW, en &eacute;tat de fonctionnement, avec les premi&egrave;res versions de Max ! &mdash; que son futur (par exemple des environnements de VR pilot&eacute;s par Max). Les Maxers ont &eacute;galement pu exp&eacute;rimenter l&rsquo;&eacute;coute holophonique gr&acirc;ce &agrave; un r&eacute;seau WFS (Wave Field Synthesis) de 186 haut-parleurs pilot&eacute;s par Spat et Max4Live. L&rsquo;Ircam &eacute;tait &eacute;galement de la partie: Marta Gentilucci (compositrice en r&eacute;sidence de recherche) et J&eacute;r&ocirc;me Nika (&eacute;quipe RepMus) ont pr&eacute;sent&eacute; la biblioth&egrave;que DYCI2 d&rsquo;agents g&eacute;n&eacute;ratifs pour l&rsquo;improvisation musicale, et Thibaut Carpentier (&eacute;quipe EAC) a fait une d&eacute;monstration de Spat~.</span></p>\r\n<p>L'Universit&eacute; des Arts de Tokyo &agrave; Geidai organisera &eacute;galement sa masterclass d'&eacute;t&eacute; Max du 5 au 9 ao&ucirc;t 2019.</p>\r\n<p>Le logiciel Max est d&eacute;velopp&eacute; et distribu&eacute; par la soci&eacute;t&eacute; Cycling '74. 
Il est &eacute;galement distribu&eacute; par le Forum Ircam. Plus d'informations <a href=\"http://forumnet.ircam.fr/product/max8-en/\" target=\"_blank\">ici.</a></p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 6,
            "forum_user": {
                "id": 6,
                "user": 6,
                "first_name": "Paola",
                "last_name": "Palumbo",
                "avatar": "https://forum.ircam.fr/media/avatars/_DSC8129.jpeg",
                "avatar_url": "/media/cache/fc/4e/fc4eec9cd07d03302b5a8091cf755eb4.jpg",
                "biography": "Paola Palumbo is the events and marketing Manager of Forum Ircam.\nThe Forum Ircam is the community of users of Ircam software that include the platform forum.ircam.fr  and Forum workshop where converge artists and scientists of all around the world.\nFrom 2011 to 2017 she is also coordinator of Research and Creativity Interfaces Department and follow artists in the IRCAM Musical Research Residency Program. \nShe is co-founder of Ircam Live electro concerts (2011-2015) and Forum Hors les Murs events (Seoul 2014, Buenos Aires, Sao Paulo 2015, Taiwan 2016, Santiago de chile 2017, Shanghai 2019, Montreal 2021). \nShe is in charge of several international partnership with universities and cultural organisations.\n\nShe collaborated with several Festival as Image Sonore and Les Vieilles Charrues in charge of program and partnership .\n\nPreviously she received a Master Degree in Public Politics and Social change \"Cultural Project Management\" at the University Pierre Mendès France (UPMF), Institut d’Etudes Politiques (IEP), Observatoire des Politiques Culturelles (OPC), Grenoble, France and a Master Degree in Political Science, University « La Sapienza » Roma, Italy.",
                "date_modified": "2026-03-03T17:50:06.221851+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 424,
                        "forum_user": 6,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [
                            {
                                "id": 343,
                                "membership": 424
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "palumbo",
            "first_name": "Paola",
            "last_name": "Palumbo",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 21,
                    "emitter_object_id": 277,
                    "user": 6,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "max-a-30-ans",
        "pk": 214,
        "published": true,
        "publish_date": "2019-05-09T11:15:37+02:00"
    },
    {
        "title": "How Can a Past Life Regression Course Help You Heal and Transform?",
        "description": "In today’s fast-moving world, many people feel emotionally stuck, confused, or disconnected without fully understanding why. Often, the answers lie beyond our present life experiences.",
        "content": "<div>\n&lt;section class=\"text-token-text-primary w-full focus:outline-none [--shadow-height:45px] has-data-writing-block:pointer-events-none has-data-writing-block:-mt-(--shadow-height) has-data-writing-block:pt-(--shadow-height) [&amp;:has([data-writing-block])&gt;*]:pointer-events-auto scroll-mt-[calc(var(--header-height)+min(200px,max(70px,20svh)))]\" dir=\"auto\" data-turn-id=\"request-WEB:6083258a-e2bf-4dcf-a52c-43826ef823df-10\" data-testid=\"conversation-turn-20\" data-scroll-anchor=\"true\" data-turn=\"assistant\"&gt;\n<div>\n<div>\n<div>\n<div>\n<div>\n<div>\n<p>In today&rsquo;s fast-moving world, many people feel emotionally stuck, confused, or disconnected without fully understanding why. Often, the answers lie beyond our present life experiences. This is where a Past Life Regression Course becomes a powerful tool for healing, self-discovery, and transformation. It allows individuals to explore memories believed to be carried from previous lifetimes and understand how they may influence current thoughts, behaviors, and life patterns.</p>\n<p>Past Life Regression (PLR) is a guided therapeutic technique that helps individuals access subconscious memories through deep relaxation or hypnosis. These memories may reveal unresolved emotions, fears, or karmic patterns that continue to affect present life. A well-structured course in past life regression teaches students how to safely guide themselves and others through this process.</p>\n<p>One of the most significant benefits of a <a href=\"https://iivs.com/past-life-regression/\"><strong>Past Life Regression Course</strong></a> is emotional healing. Many people carry unexplained fears, phobias, or recurring challenges that have no clear origin in their current life. Through regression, these patterns can be traced back to past experiences, allowing individuals to release emotional blockages and gain clarity. 
This healing process often leads to a sense of relief, inner peace, and renewed confidence.</p>\n<p>A comprehensive course typically begins with the fundamentals of the subconscious mind and how it stores memories. Students learn about the connection between past experiences and present behavior. Understanding this relationship is crucial for conducting effective regression sessions. The course also covers techniques of relaxation, visualization, and guided meditation, which are essential for accessing deeper states of consciousness.</p>\n<p>Another important aspect of the training is learning how to conduct sessions ethically and responsibly. Since past life regression involves deep emotional work, it is important to create a safe and supportive environment. Students are trained to guide clients gently, handle emotional responses with care, and ensure a positive and healing experience.</p>\n<p>Practical training is a key component of any effective Past Life Regression Course. Students are given opportunities to practice regression techniques through guided sessions and real-life scenarios. This hands-on approach helps in building confidence and improving skills. Over time, learners become more comfortable in conducting sessions and interpreting the experiences shared by clients.</p>\n<p>In addition to emotional healing, past life regression also promotes self-awareness and spiritual growth. By exploring past life memories, individuals gain a deeper understanding of their soul&rsquo;s journey. This awareness often leads to personal transformation, better decision-making, and a more balanced perspective on life. It helps individuals let go of limiting beliefs and embrace their true potential.</p>\n<p>In recent years, past life regression has gained popularity as a professional career option. Many individuals are turning their interest in spirituality into a meaningful profession. 
A well-designed course not only teaches the techniques but also guides students on how to establish themselves as professional practitioners. This includes building confidence, developing communication skills, and understanding client needs.</p>\n<p>Another unique advantage of learning past life regression is its ability to improve relationships. Many conflicts and emotional patterns in relationships may have roots in past life connections. By understanding these patterns, individuals can develop empathy, forgiveness, and better communication, leading to healthier and more fulfilling relationships.</p>\n<p>The learning environment also plays a vital role in the overall experience. Being part of a supportive community of learners and mentors enhances growth and motivation. Interaction, feedback, and shared experiences create a deeper understanding of the subject and encourage continuous learning.</p>\n<p>Moreover, a Past Life Regression Course is suitable for a wide range of individuals. Whether you are a beginner, a spiritual seeker, a therapist, or someone looking for personal growth, this course offers valuable insights and skills. It does not require any prior experience, making it accessible to anyone interested in exploring the deeper aspects of life.</p>\n<p>It is important to approach past life regression with an open mind and a balanced perspective. While experiences during regression can be powerful and insightful, they should be used as tools for healing and growth rather than absolute truths. A good course emphasizes this balance, ensuring that students remain grounded while exploring deeper consciousness.</p>\n<p>In conclusion, a Past Life Regression Course is much more than learning a technique&mdash;it is a journey of healing, understanding, and transformation. It helps individuals uncover hidden aspects of themselves, release emotional burdens, and move forward with clarity and confidence. 
Whether pursued for personal growth or as a professional path, this course opens the door to a deeper connection with oneself and the universe. By embracing the wisdom of past life regression, individuals can create a more peaceful, purposeful, and empowered life.</p>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n&lt;/section&gt;\n</div>",
        "topics": [
            {
                "id": 4553,
                "name": "astrology course",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4555,
                "name": "spell casting course",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4554,
                "name": "tarot card reading course",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166655,
            "forum_user": {
                "id": 166418,
                "user": 166655,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/8a79362000cd53c2400f2bc14a9cf838?s=120&d=retro",
                "biography": "At IIVS, we are dedicated to empowering individuals through the wisdom of ancient sciences and modern spiritual learning. Our institute offers a wide range of courses including astrology, numerology, tarot, and other esoteric practices, designed to be simple, practical, and result-oriented. With expert guidance and structured learning, we help students build deep knowledge along with real-world application.",
                "date_modified": "2026-04-05T10:45:03.799637+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "iivs1233",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "how-can-a-past-life-regression-course-help-you-heal-and-transform",
        "pk": 4596,
        "published": false,
        "publish_date": "2026-04-05T10:48:53.785430+02:00"
    },
    {
        "title": "Main Modal Kecil, Maxwin Bukan Sekadar Mimpi: Starlight Princess Lagi Panas!",
        "description": "Main Modal Kecil, Maxwin Bukan Sekadar Mimpi: Starlight Princess Lagi Panas!",
        "content": "<p>&nbsp;</p>\n<p>&nbsp;</p>\n<p>Popularitas game slot online kian meningkat di kalangan pencinta hiburan digital, terutama dengan kehadiran judul-judul unggulan dari Pragmatic Play. Salah satu yang paling mencuri perhatian adalah Starlight Princess, yang belakangan ramai diperbincangkan lantaran dianggap \"sedang panas-panasnya\". Banyak pemain pemula melaporkan bahwa dengan modal pas-pasan, mereka bisa meraih Maxwin berkat scatter di <a href=\"https://luhurislamika.com\"><strong>situs RTP dewa poker slots terbaru</strong></a> yang kerap turun dan multiplier besar hingga x500.</p>\n<p>Fenomena ini menjadikan Starlight Princess sebagai mesin cuan baru, khususnya bagi akun-akun segar. Para pendatang baru di komunitas dewapoker ramai-ramai berbagi cerita bahwa mereka hanya butuh beberapa puluh putaran untuk langsung memasuki free spin, lalu disambut deretan multiplier yang melipatgandakan kemenangan berkali-kali. Wajar jika game ini kian diminati pemain pemula.</p>\n<p>Artikel ini akan mengupas tuntas bagaimana modal kecil bisa membuka pintu Maxwin di Starlight Princess, strategi efektif agar permainan lebih optimal, serta keuntungan bermain di platform resmi seperti dewapoker dan domino bet.</p>\n<p><img alt=\"1-4.jpg (1329&times;423)\" src=\"https://i.ibb.co.com/ch4Bdr3t/1-4.jpg\"></p>\n<h2>Starlight Princess dan Kisah Maxwin Bermodal Receh</h2>\n<p>Starlight Princess adalah slot bergulungan 6 dengan sistem Pay Anywhere, artinya simbol kemenangan dapat terbentuk di posisi mana pun di layar. Mekanisme ini membuat peluang menang jauh lebih fleksibel dibandingkan slot konvensional. Ditambah dengan fitur Tumble, simbol pemenang akan meledak dan digantikan simbol baru, membuka peluang kemenangan beruntun dalam satu putaran.</p>\n<p>&nbsp;</p>\n<p>Para pemain di domino bet melaporkan bahwa scatter relatif lebih mudah hadir saat bermain dengan akun baru. Cukup dengan 4 scatter, mode free spin langsung aktif. 
Di sinilah pintu menuju multiplier besar terbuka, di mana sang putri cantik akan memberikan pengali hingga x500. Dengan modal kecil sekalipun, cuan besar bisa diraih jika multiplier ini terus bergulir.</p>\n<p>&nbsp;</p>\n<p>Komunitas domino bet juga tidak kalah ramai membicarakan kemenangan para pemula. Banyak yang mengaku hanya bermodalkan receh, tetapi pulang dengan kantong tebal berkat scatter dan multiplier yang datang bertubi-tubi. Fenomena ini semakin menguatkan anggapan bahwa Starlight Princess memang sedang \"berpihak pada pemain baru\".</p>\n<p>&nbsp;</p>\n<h2>Strategi Ampuh Pemula: Putaran Pendek dan Taruhan Konsisten</h2>\n<p>Meski faktor keberuntungan sangat dominan, strategi tetap diperlukan agar modal kecil bisa dimanfaatkan secara maksimal. Pemain berpengalaman di dewapoker menyarankan untuk memulai dengan putaran manual singkat sekitar 10&ndash;15 kali. Pola ini digunakan untuk mendeteksi apakah scatter sedang aktif sebelum melanjutkan ke sesi putaran yang lebih panjang.</p>\n<p>&nbsp;</p>\n<p>Setelah scatter mulai terlihat, lanjutkan dengan auto spin 20&ndash;30 kali. Banyak pemain melaporkan bahwa strategi ini membuat free spin lebih cepat terpicu, sehingga peluang multiplier besar semakin sering muncul. Dengan pola seperti ini, modal kecil bisa dipertahankan hingga akhirnya kemenangan besar datang.</p>\n<p>&nbsp;</p>\n<p>Selain itu, jangan lupa mengaktifkan fitur Double Chance to Win. Dengan menambahkan sekitar 25% dari taruhan, peluang scatter turun meningkat dua kali lipat. Pemula yang ingin cepat merasakan free spin sebaiknya memanfaatkan fitur ini, karena meskipun taruhan sedikit lebih besar, hasil dari free spin seringkali mampu melipatgandakan saldo dengan cepat.</p>\n<p>&nbsp;</p>\n<h2>Kelebihan Bermain di dewapoker dan domino bet</h2>\n<p>Platform tempat bermain sangat menentukan kenyamanan dan keamanan Anda. dewapoker dan domino bet adalah dua situs terpercaya yang menghadirkan Starlight Princess dengan RTP asli dari Pragmatic Play. 
Hal ini memastikan bahwa peluang scatter dan multiplier tetap fair tanpa ada rekayasa.</p>\n<p>&nbsp;</p>\n<p>dewapoker unggul dengan penawaran bonus besar untuk akun baru, cashback rutin, hingga event slot eksklusif yang menambah kesempatan free spin. Pemain pemula di <a href=\"https://steclairemur.org\"><strong>bandar ceme online domino bet uang asli</strong></a> bisa memanfaatkan bonus ini untuk memperbesar modal bermain tanpa khawatir saldo cepat ludes.</p>\n<p><img alt=\"1-3.jpg (700&times;350)\" src=\"https://i.ibb.co.com/xSQY11RT/1-3.jpg\"></p>\n<p>Sementara itu, domino bet dikenal dengan transaksi yang cepat, antarmuka yang ramah pengguna, serta komunitas yang aktif. Anda bisa melakukan deposit maupun withdraw dengan mudah, sekaligus mendapat akses ke tips pola scatter terbaru dari sesama anggota komunitas. Kombinasi faktor inilah yang membuat pengalaman bermain lebih aman dan berpotensi mendatangkan cuan lebih cepat.</p>\n<p>&nbsp;</p>\n<p><strong>FAQ: Pertanyaan yang Sering Diajukan</strong></p>\n<ol>\n<li><strong> Apakah benar modal kecil bisa meraih Maxwin di Starlight Princess?</strong><br>Ya. Banyak pemain pemula yang melaporkan kemenangan besar hanya dengan modal kecil berkat scatter dan multiplier hingga x500.</li>\n<li><strong> Bagaimana strategi agar modal kecil lebih efektif?</strong><br>Gunakan putaran manual singkat di awal, lanjutkan dengan auto spin saat scatter aktif, dan aktifkan Double Chance to Win untuk memperbesar peluang free spin.</li>\n<li><strong> Mengapa harus bermain di dewapoker atau domino bet?</strong><br>Karena keduanya adalah platform resmi dengan RTP asli, bonus menarik, transaksi cepat, serta komunitas aktif yang rajin berbagi strategi.</li>\n<li><strong> Apakah akun baru lebih mudah mendapatkan scatter?</strong><br>Banyak testimoni menyebut demikian. 
Pemain baru kerap melaporkan scatter lebih cepat hadir saat bermain dengan akun segar.</li>\n<li><strong> Apa tips penting agar pemula tidak cepat kehabisan modal?</strong><br>Jaga taruhan tetap kecil, manfaatkan bonus dari platform resmi, dan beristirahat sejenak bila pola scatter sedang tidak muncul.</li>\n</ol>\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 4534,
                "name": "dewapoker",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4535,
                "name": "domino bet",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166320,
            "forum_user": {
                "id": 166084,
                "user": 166320,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/SnapInsta.to_642491714_18087666080132774_2060721589306924146_n.jpg",
                "avatar_url": "/media/cache/16/9c/169c84d0deeea7baa8e86caed12142a3.jpg",
                "biography": "Lagi rame bonus member baru dewapoker, banyak yang spill menang di Starlight Princess auto bikin penasaran! Yuk coba domino bet sekarang sebelum kelewatan!",
                "date_modified": "2026-04-01T08:48:03.087313+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ivahadelia",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "main-modal-kecil-maxwin-bukan-sekadar-mimpi-starlight-princess-lagi-panas",
        "pk": 4567,
        "published": false,
        "publish_date": "2026-04-01T08:48:38.707799+02:00"
    },
    {
        "title": "Holophonic Recording and Virtual Reproduction",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>An overview of the concept of &ldquo;holophonic&rdquo; sound in which there is focus on capturing the three-dimensional characteristics of a single sound source and reproducing it in the virtual space, recreating a &ldquo;sonic hologram&rdquo; of the sound object itself.</p>",
        "topics": [],
        "user": {
            "pk": 27584,
            "forum_user": {
                "id": 27556,
                "user": 27584,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/873e5ffbd7395e94a95feff35ccd1348?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "psongmuang",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "holophonic-recording-and-virtual-reproduction",
        "pk": 1342,
        "published": true,
        "publish_date": "2022-09-13T16:55:22+02:00"
    },
    {
        "title": "Shallow Steps: Sonification and Spatialisation of the Cognitive Perception of Audiovisual Fractal Spaces - Umut ELDEM",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>&lsquo;Shallow Steps&rsquo; is an audiovisual installation/performance that explores the synaesthetic space between sound, vision, audience, and infinity. The technique and software for the translation of the visuals to sound is created through the composer&rsquo;s research on &lsquo;Cognitive Audiovisual Transformation&rsquo;- an approach to synaesthetic art that prioritizes cognitive elements instead of mathematical correspondences. The self-repeating structure of fractals, especially mathematical fractals such as the Mandelbrot set, is the starting point of the work. Applying certain mathematical formulae on a visual plane creates a specific, self-repeating shape. Zooming into this shape creates more intricate patterns. Theoretically these patterns continue until infinity, ever-changing and yet always unique. In this work it is this pattern that is transformed into sound and given an audiovisual form, as an automata entity.</p>",
        "topics": [],
        "user": {
            "pk": 14904,
            "forum_user": {
                "id": 14901,
                "user": 14904,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Screenshot_2023-02-20_at_11.41.10.png",
                "avatar_url": "/media/cache/29/ca/29cad53a71cf9f5d741fd4a8b88acc66.jpg",
                "biography": "Umut Eldem (°1993) is a composer, pianist, and researcher. His musical works and research focus on the exploration of synaesthesia as an artistic medium. He started his composition studies in the Mimar Sinan State Conservatory in Istanbul, Turkey, studying under Prof. Hasan Uçarsu and Prof. Mehmet Nemutlu. During his Bachelor’s education he has participated in the Erasmus student exchange program and studied under Prof. Francesco Telli in the Santa Cecilia Conservatory in Rome, Italy. After receiving his Bachelor’s diploma in Composition, he has pursued his Master’s studies in the Royal Conservatoire of Antwerp with Prof. Wim Henderickx and Prof. Luc van Hove in Belgium. In the same institution he has done his Post-graduate research, ‘Foundations of Cross-Modal Analytic Thinking’, on the applicability of synaesthesia and colour as an inter-sensory musical concept. \n\nHe has given lectures on his research of synaesthesia, and had his audiovisual works and installations combining sound and colours presented in Belgium, Turkey, Romania, Luxembourg, and Russia. In 2020 he has won the 7th Sampo Composition Contest. His research project ‘Synaesthesia and Sound-colour Associations as An In",
                "date_modified": "2023-11-14T23:07:31.822168+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "umutreldem",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "shallow-steps-sonification-and-spatialisation-of-the-cognitive-perception-of-audiovisual-fractal-spaces",
        "pk": 2075,
        "published": true,
        "publish_date": "2023-02-20T11:57:19+01:00"
    },
    {
        "title": "Dynamic Spatial Mixing for Multi-Channel Audio by Aleksandar Zecevic and Kiran Bhumber",
        "description": "This live demo introduces a system for dynamic spatialization and mixing in multi-channel environments using Max/MSP and IRCAM's SPAT library. The framework combines adaptive spatial rendering with dynamic mixing to create an evolving, responsive sonic field.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>Using 3D positional data from sound sources and the listener's perspective, the system selects or combines multiple spatial rendering methods in real time. Audio<br />objects dynamically influence one another and interact with static beds, generating shifting amplitude and frequency relationships and establishing priority-based behaviour between moving and static spatial elements. Designed for flexibility, the system can quickly adapt to different loudspeaker configurations and venue types, and also provides parallel binaural rendering for headphone monitoring and remote demonstration, making it suitable for both fixed installations and live performance contexts.</p>\r\n<p><img src=\"/media/uploads/adaptivespatialrendering_dynamicmixing_aleksandarzecevic-kiranbhumber-projectpictures_(1).png\" alt=\"\" width=\"1349\" height=\"754\" /></p>",
        "topics": [],
        "user": {
            "pk": 18643,
            "forum_user": {
                "id": 18636,
                "user": 18643,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/935eb4d5ae49c095d2f257e88b536ac0?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-27T05:22:51.362366+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1443,
                        "forum_user": 18636,
                        "date_start": "2026-03-19",
                        "date_end": "2027-03-19",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "azecevic",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 18643,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 42,
                    "user": 18643,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 99,
                    "user": 18643,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 38,
                    "user": 18643,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4290,
                    "user": 18643,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "dynamic-spatial-mixing-for-multi-channel-audio-by-aleksandar-zecevic-and-kiran-bhumber-1",
        "pk": 4290,
        "published": true,
        "publish_date": "2026-01-30T11:02:58+01:00"
    },
    {
        "title": "AudioStellar, an open source corpus-based musical instrument for latent sound structure discovery and sonic experimentation by Agustin Spinetto",
        "description": "AudioStellar is a free experimental sampler that uses AI to generate a 2D point map from a folder with audio samples. The sounds included in the map can be played in novel ways, impossible to achieve with traditional DAWs, samplers and even custom code.\r\nWe are a research team constituted by members from the Universidad Nacional de Tres de Febrero (Buenos Aires) and Temple University Japan Campus (Tokyo).\r\nThe project was presented in talks, workshops and seminars, as well as in performances and concerts in numerous international venues: MUTEK, IRCAM, NIME, ICMC, AIMC, Universitat Pompeu Fabra, Tokyo University of the Arts, among others.\r\nAs a result of our global presence, we have recorded an average of 550 visits per month from more than 30 countries and 700 downloads in the last 6 months. Users from all over the world share their experiences and contribute to the development of the program through our forums, creating an ever-growing collaborative community.",
        "content": "<h1><strong></strong></h1>\r\n<h1><img alt=\"logo\" src=\"https://forum.ircam.fr/media/uploads/user/25c7b60c5e58bc0a4b8a4e9c12d4c740.png\" /></h1>\r\n<p>Generating a visual representation of short audio clips&rsquo; similarities are not only useful for organizing and exploring an audio sample library, but it also opens up a new range of possibilities for sonic experimentation.&nbsp;</p>\r\n<p>We present AudioStellar, an open source software that enables creative practitioners to create AI generated 2D visualizations (i.e latent space) of their own audio corpus without programming or machine learning knowledge.&nbsp; Sound artists can play their input corpus by interacting with this computer learned latent space using a user interface that provides built-in modes to experiment with. AudioStellar can interact with other software by MIDI syncing, sequencing, adding audio effects, and more. Creating novel forms of interaction is encouraged through OSC communication or writing custom C++ code using provided framework.&nbsp;</p>\r\n<p>AudioStellar is a free experimental sampler that uses AI to generate a 2D point map from a folder with audio samples. The sounds included in the map can be played in novel ways, impossible to achieve with traditional DAWs, samplers and even custom code.</p>\r\n<p><a href=\"https://youtu.be/KKnRmpiih84?feature=shared\" title=\"AudioStellar,  an experimental sampler powered by AI\">AudioStellar Demo Video</a></p>\r\n<h1>Maps</h1>\r\n<p>The software processes a folder with user-selected sounds to generate an intelligent sound map, placing each sound as a point in a 2D space. On the map, close dots correspond to similar sounds, while distant dots represent different sounds. 
The dots are grouped into colored clusters to differentiate more intuitively the diverse timbres that compose the map.</p>\r\n<p><img alt=\"map 1\" src=\"https://forum.ircam.fr/media/uploads/user/d9ebd5b3150836dd2ae636b7528aa215.png\" /></p>\r\n<p>The map is a visual interface with a double function: it reveals the latent, pre-existing structure in the relationships between the audio samples and also allows the sounds to be reproduced in novel ways through the Units.</p>\r\n<p><img alt=\"map 2\" src=\"https://forum.ircam.fr/media/uploads/user/6e76049216724305c4a3b3edc91b1e32.png\" /></p>\r\n<h2>Units</h2>\r\n<p>AudioStellar provides several Units that allow interaction with sound samples through new logics and criteria that are not possible to achieve with traditional sampling techniques. Each session can have multiple Units, each with controllable mixing parameters, just like the channels of a mixer. Units take advantage of the unique interface created by AudioStellar to play the chosen collection of audio samples. Moreover, all Units can incorporate effects and be controlled by OSC or MIDI.</p>\r\n<p>&nbsp;</p>\r\n<h2>Explorer Unit&nbsp;</h2>\r\n<p>It allows listening to the different sounds one by one in a precise way, as well as creating spatial trajectories, traced by the user, which behave like loops. It makes it possible to explore the sound collection by listening to the generated map to discover latent timbral relationships.</p>\r\n<p><img alt=\"explorer unit\" src=\"https://forum.ircam.fr/media/uploads/user/7b7060fcfbd2d2e7a799730d2e958a3c.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>&nbsp;</p>\r\n<h2>Particle Unit</h2>\r\n<p>Particles are autonomous agents that move around the map, reproducing any sound they touch. 
These particles have multiple control parameters and can move through the map as swarms or as explosions.</p>\r\n<p><img alt=\"particle unit\" src=\"https://forum.ircam.fr/media/uploads/user/68b8dfd7cf95e11b571faf714e8b3b31.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h2>Sequence Unit</h2>\r\n<p>It defines a sequence of sounds that are played back using distance as rhythm, a tool that transcends traditional musical languages. By running several Sequence Units in parallel it is possible to explore an unexpected rhythmic universe.</p>\r\n<p><img alt=\"sequence unit\" src=\"https://forum.ircam.fr/media/uploads/user/a712c8641f576a56f6a06e62de96a8b8.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h2>Morph Unit</h2>\r\n<p>This unit creates sound textures by mixing a region of samples played at different intensities. It is a tool designed to execute sound gestures driven by external physical trajectories or controllers.</p>\r\n<p><img alt=\"morph unit\" src=\"https://forum.ircam.fr/media/uploads/user/422305f78bbcf1022973cb4085d73f66.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h2>OSC Unit</h2>\r\n<p>This unit facilitates connecting AudioStellar to external programming environments (Max, PureData, Python), as it exposes a library of numerous OSC methods for creating custom heuristics.</p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2250,
                "name": "audiostellar",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1960,
                "name": "experimental sampler",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 313,
                "name": "Machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 115,
                "name": "Music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1990,
                "name": "sampler",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2251,
                "name": "samples",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 332,
                "name": "sounds",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 22986,
            "forum_user": {
                "id": 22968,
                "user": 22986,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/profile_Agustin_Spinetto.jpg",
                "avatar_url": "/media/cache/7a/38/7a38ddefe4ef9ba4b837d53b7f589547.jpg",
                "biography": "Agustín Spinetto is a musician, sound artist and professor from Argentina based in Tokyo. Bachelor in Electronic Arts at the UNTREF University, he had worked as a professor at the same university to later move to Japan and graduate with a Master's Degree in Music and Sound Creation at the Tokyo University of the Arts - 東京藝術大学. His Master’s thesis was related to the use of visual and digital interfaces for music performance.\n\nAgustin has been working with electronic and acoustic music instruments and using new technologies for music experimentation purposes. His works cover a wide variety of techniques, from Live Electronics concerts to art collaborations with visual and plastic artists. His works have been presented at galleries, universities and venues in Buenos Aires, New York, Seoul, Austria and Tokyo. Along his art and music compositions, it is possible to recognize a common thread of, time, distance, migration and social conscience. \nFurthermore, he is a professor in the Communication Studies Mayor at Temple University, Japan Campus and he has been part of AudioStellar Research Team since 2019.",
                "date_modified": "2025-07-26T08:55:59.669307+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 993,
                        "forum_user": 22968,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "agusajs",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 1005,
                    "user": 22986,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "audiostellar",
        "pk": 3017,
        "published": true,
        "publish_date": "2024-10-07T03:06:45+02:00"
    },
    {
        "title": "Session C-LAB Focusing 2025 by Jing-Shiuan Tsang & Chia-Hui Chen",
        "description": "In addition to aiming to help Asian creators brainstorm and utilize mixed media to expand Asia’s own cultural aesthetics, C-LAB have recently been actively exploring the unique auditory experiences created by installation art and emerging irregularly shaped spherical speakers. Dicy2, Spat5, 393-Speaker, and Jacktrip.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -100px; top: -20.0052px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/ef94c8803f58eecdccceb7f0f280ad7c.png\" max-width=\"1588\" max-height=\"1069\" /></p>\r\n<p>Presented by&nbsp;Jing-Shiuan Tsang, Chia-Hui Chen</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/tslclab/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<p><strong>Application of Dicy2 in the Production The Day in Gad-Avia</strong></p>\r\n<p>Process - Sound Collection and Training</p>\r\n<p>Using sound data from 2023 to 2024 registered by NanFormosa, it is placed in the Memory Creator for analysis, with Nana performing improvisational interactions. 
Additionally, a MIDI controller is used to control five sound tracks with Dicy2 as a plugin.</p>\r\n</div>\r\n</div>\r\n<br />\r\n<div>\r\n<div>\r\n<p>Demo - Starting from the second part (around 7 minutes), using Dicy2 (developed by Ircam).<br />https://youtu.be/HXsWx6vaYo8</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d18be37f6db95aa83c326d5e9b622307.png\" width=\"1733\" height=\"1098\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2550f8f9b813451e09dd1aaa7ed376f8.png\" /></p>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<p><strong>Interfaced SPAT5 SPAT system &ldquo;TSofM in 2025&rdquo;</strong></p>\r\n<p>In 2025, Taiwan Sound Lab remains dedicated to helping artists and creative teams realize their visions for 3D and spatialized sound works. To make the powerful spatialization tool &ldquo;Spat5&rdquo; quickly accessible, Taiwan Sound Lab has systematized Spat5 objects into an interfaced &ldquo;SPAT&rdquo; engine, &ldquo;TSofM&rdquo;.</p>\r\n<p>The full name of &ldquo;TSofM&rdquo; is &ldquo;The Spatialization SPAT Operation for Max&rdquo;; it derives from and builds on its previous model, &ldquo;TSofL&rdquo;. Now, &ldquo;TSofM SPAT&rdquo; is no longer limited to Ableton Live. 
With the OSCar plug-in, &ldquo;TSofM SPAT&rdquo; can be used in any DAW that supports OSCar, while remotely controlling the DAW at the same time.</p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/fee5d64405d98473e27d519e3ac4e822.png\" width=\"1012\" height=\"693\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a558feeb102f902d1b361a230f99427c.png\" width=\"1190\" height=\"635\" /></p>\r\n<p><strong>2025 C-LAB x NTCAM x Samson Young - Art Installations &amp; Spherical Array Speaker</strong></p>\r\n<p>C-LAB and the New Taipei City Art Museum invited artist Samson Young to design several multi-channel spherical array speakers (SAS) as art and sound installations for an exhibition.</p>\r\n<ul>\r\n<li>5&times; SAS speakers as installations</li>\r\n<li>Each SAS speaker has 7 crescent-shaped arc surfaces</li>\r\n<li>20&times; 3-inch speakers and 1&times; 5-inch speaker on each crescent-shaped arc surface</li>\r\n</ul>\r\n<p>PAR and BWS lights are used as lamp installations, and the space creates a distinctive sound atmosphere using subwoofer phase cancellation. Both interact with audiences through detection sensors, with real-time delay calculation.</p>\r\n<p><strong>&ldquo;Ocean Data Voyage&rdquo;</strong><br />From Big Data to Cross-Pacific Ensemble</p>\r\n<p>The concert features an Internet ensemble of musicians in Taipei and California. The signals they send across the ocean will themselves be drawn from the ocean: its physical state, monitored by computer, provides the performers with the data they'll play, for example tides, temperature, chemistry, and the atmosphere above. 
All of the compositions on the program will feature \"big data\" put to music in a process called \"sonification\", and we've formed the largest sonification \"big band\" ever to hit the stage for this show. - Chris Chafe</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e839634bd84f10241369e8115a605127.jpg\" max-width=\"3226\" max-height=\"2147\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e4471268ce826a6fbb0c83b0a69ea1ec.jpg\" max-width=\"3224\" max-height=\"2145\" /></p>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<p><strong>Multimedia Dance Theater &ldquo;OVERWRITING&rdquo;</strong></p>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<p>&ldquo;Through the layered interweaving of dance and sound, it captures the fleeting shifts of thoughts in moments.&rdquo;</p>\r\n<p>Taiwanese theater director Ching-Hsin Hsiao uses the 3-9-3 speaker and an ambisonic microphone as sources of creative inspiration, attempting to redefine the physical characteristics of these devices and explore their application in aesthetic creation, striving to uncover the balance in modern technological artistry between reason and emotion, and between inspiration and reflexive action.</p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<p>Theater Director / Ching-Hsin HSIAO<br />Sound and Music Design / Hsien-Te HSIE<br />Dancer / Chu-Ying KU<br />Lighting Designer / Han-Sheng LIN</p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 2751,
                "name": "393-Speaker",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1036,
                "name": "DICY2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2752,
                "name": "Ircam forum",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2750,
                "name": "Jacktrip",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 45,
                "name": "Spat5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 39542,
            "forum_user": {
                "id": 39488,
                "user": 39542,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/327006225_875940203643203_1455195321353496755_n.jpg",
                "avatar_url": "/media/cache/03/82/03821466af8e5260cab8db7be3b2db84.jpg",
                "biography": "",
                "date_modified": "2026-03-05T02:59:10.479070+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 452,
                        "forum_user": 39488,
                        "date_start": "2023-06-16",
                        "date_end": "2026-10-25",
                        "type": 0,
                        "keys": [
                            {
                                "id": 73,
                                "membership": 452
                            },
                            {
                                "id": 196,
                                "membership": 452
                            },
                            {
                                "id": 216,
                                "membership": 452
                            },
                            {
                                "id": 766,
                                "membership": 452
                            },
                            {
                                "id": 1159,
                                "membership": 452
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "tslclab",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 132,
                    "user": 39542,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "session-c-lab-focusing-2025-by-jing-shiuan-tsang-chia-hui-chen",
        "pk": 3357,
        "published": true,
        "publish_date": "2025-03-14T09:07:12+01:00"
    },
    {
        "title": "Netgear Customer Support: Reliable Help for Seamless Networking",
        "description": "Netgear Customer Support provides assistance for setting up, troubleshooting, and maintaining Netgear networking devices like routers and extenders. It helps users resolve connectivity issues, improve performance, and ensure network security, making it easier to manage a stable and reliable internet connection.",
        "content": "<p>In today&rsquo;s connected world, having a stable and secure internet connection is essential, and that&rsquo;s where <strong>Netgear Customer Support</strong> plays a vital role. Whether you are setting up a new router, configuring advanced settings, or dealing with unexpected connectivity issues, Netgear Customer Support provides the assistance needed to keep your network running efficiently. It is designed to help both beginners and experienced users manage their devices with ease.</p>\n<p>One of the key benefits of <a href=\"https://techsupporthub.support/netgear-customer-service/\">Netgear Customer Support </a>is its guidance during the installation and setup process. Many users face challenges when connecting routers, modems, or extenders for the first time. With proper support, these devices can be configured quickly and correctly, ensuring optimal performance from the start. The service also helps users understand features like parental controls, guest networks, and security settings.</p>\n<p>Another important aspect of Netgear Customer Support is troubleshooting. Network disruptions, slow speeds, or connection drops can be frustrating, but with expert assistance, these issues can be identified and resolved efficiently. The support team provides step-by-step solutions to fix both software and hardware-related problems, minimizing downtime and improving user experience.</p>\n<p>Security is also a major focus of Netgear Customer Support. With increasing online threats, users need to ensure their networks are protected. Support services offer advice on firmware updates, password management, and advanced security configurations to safeguard personal and professional data.</p>\n<p>Additionally, Netgear Customer Support helps with maintenance and updates. Keeping devices up to date is crucial for performance and security, and users can get help with firmware upgrades and regular system checks. 
In cases of hardware malfunction, the support service also guides users on repair or replacement options.</p>\n<p>Overall, Netgear Customer Support serves as a dependable resource for anyone using Netgear products. From setup to troubleshooting and security, it ensures that users can enjoy a smooth, safe, and uninterrupted networking experience.</p>",
        "topics": [],
        "user": {
            "pk": 166321,
            "forum_user": {
                "id": 166085,
                "user": 166321,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f72e6711c77973b1d4be53e4d52d9b47?s=120&d=retro",
                "biography": "Netgear Customer Support is a service designed to help users set up, manage, and troubleshoot their Netgear networking devices such as routers, modems, and extenders. It provides assistance with installation, fixing connectivity issues, improving network performance, and ensuring device security. The support also includes guidance for updates, maintenance, and resolving hardware problems, offering users reliable help to keep their network running smoothly.",
                "date_modified": "2026-04-01T09:10:09.060496+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "harlanbixby",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "netgear-customer-support-reliable-help-for-seamless-networking",
        "pk": 4568,
        "published": false,
        "publish_date": "2026-04-01T09:13:26.589684+02:00"
    },
    {
        "title": "Testing",
        "description": "I have to create something in order to get to the next step on the Forum",
        "content": "",
        "topics": [],
        "user": {
            "pk": 7874,
            "forum_user": {
                "id": 7871,
                "user": 7874,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1fade1ad71e4cffa3600e04b0aa0b834?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-05-29T16:53:41.705804+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Kit",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "testing",
        "pk": 1003,
        "published": false,
        "publish_date": "2021-11-19T22:07:02.570517+01:00"
    },
    {
        "title": "Keynote : Recent digital failures open amazing perspectives - Nicolas Henchoz",
        "description": "Presented during the IRCAM Forum Workshop 2023 In Paris",
        "content": "<p><span>The failures of augmented glasses and 3DTV, the setback of metaverse and NFT might be a good news, if you haven&rsquo;t invested in those technologies. Why? Because they generate a strong signal to move beyond tech driven innovation and ephemeral success. There is now room for projects considering in a more serious way meaning, human perception and adoption. It brings more for humans, for the planet, but also for business through sustainable revenues. How can such seducing perspectives can been turned into effective impact? We&rsquo;ll look at major concepts, with some recent and upcoming projects. Results show that sound should take a bigger role in this vision, according to its impact. This keynote will end with a major announcement related to SIGGRAPH 2024, the world leading conference on Computer Graphics and Interactive techniques. Maybe an opportunity to engage in a different way?</span></p>",
        "topics": [],
        "user": {
            "pk": 38341,
            "forum_user": {
                "id": 38291,
                "user": 38341,
                "first_name": "Nicolas",
                "last_name": "Henchoz",
                "avatar": "https://forum.ircam.fr/media/avatars/10_NICOLAS_HENCHOZ__YVES_LERESCHE_Y1030997_B.jpg",
                "avatar_url": "/media/cache/bf/69/bf697932b5fabe3c401d704aff303171.jpg",
                "biography": "Nicolas Henchoz is the founding director of the EPFL+ECAL Lab, the Design Research Centre of the Ecole Polytechnique Federale de Lausanne, created in collaboration with the ECAL (University of Art and Design, Lausanne). He is the Art Papers Chair of Siggraph 2023, the world leading conference in Computer graphics and interaction techniques. Engineer, researcher, art director, manager, he has developed a unique vision of innovation, blending cultural creativity, scientific practices and human observation to foster sustainable adoption. His projects have led to many academic contributions, awards and implemented solutions in cultural institutions, media, health, social dynamics, food and environment. Nicolas Henchoz is a visiting professor at the Politecnico di Milano. He has curated more than 30 exhibitions in institutions like the American Institute of Architecture (NYC), the Royal College of Art (London), the Musée des Arts décoratifs in Paris or Harvard University. He’s the co-founder of GAMI Global Alliance for Media Innovation and has been distinguished Chevalier des Arts et des Lettres by French minister for culture and communication.",
                "date_modified": "2023-02-07T06:23:59+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nhenchoz",
            "first_name": "Nicolas",
            "last_name": "Henchoz",
            "bookmarks": []
        },
        "slug": "recent-digital-failures-open-amazing-perspectives-nicolas-henchoz",
        "pk": 2146,
        "published": true,
        "publish_date": "2023-03-16T14:32:00+01:00"
    },
    {
        "title": "Rocky Balboa Tiger Jacket With Champion Spirit And Bold Style",
        "description": "The Rocky Balboa Tiger Jacket is inspired by the legendary boxer’s fearless spirit, featuring a bold tiger design that symbolizes strength and determination",
        "content": "<p><span style=\"\">Have you ever slipped into a jacket that doesn't just cover you up, but wraps around your whole damn story? Not some flimsy thing from a rack, but one that feels like it's got your back&mdash;literally. We're talking jackets here, man, those unsung heroes of our wardrobes that go way beyond keeping the chill off. They shield us from the world's sharp edges, pump up our confidence when we're faking it till we make it, and scream our inner worlds without us having to say a word. Think about it: in a life full of curveballs, a good jacket is your armor, your swagger, your silent roar.</span></p>\n<h2><strong>The Shield: Jackets as Protection in Life's Storms</strong></h2>\n<p><span style=\"\">Protection isn't just about blocking rain or wind; it's about guarding your core when everything else feels exposed. Jackets have been doing that since humans first threw animal hides over their shoulders. Philosophically, it's like Nietzsche's idea of the eternal recurrence&mdash;facing the same harsh realities over and over, but armored up to affirm life anyway. Relatable? Hell yeah. Remember that first job interview? You're 22, fresh out of whatever passed for school, sitting in a stiff chair across from suits who could eat you alive. Your resume's solid, but doubt's creeping in like fog. You tug at your jacket&mdash;maybe a simple leather number with a tiger stripe vibe echoing that </span><a href=\"https://www.themoviefashion.com/product/sylvester-stallone-rocky-balboa-black-jacket/\"><strong>Rocky Balboa tiger jacket</strong></a><span style=\"\"> grit&mdash;and suddenly, it's not just fabric. It's a barrier. 
The world can't touch you.</span></p>\n<ul>\n<li style=\"\"><span style=\"\">That supple leather hugs your frame, the kind crafted with precision seams that won't split under pressure.</span></li>\n<li style=\"\"><span style=\"\">Pockets deep enough for your shaking hands, design elements like reinforced elbows whispering \"you got this.\"</span></li>\n<li style=\"\"><span style=\"\">The weight on your shoulders grounds you, turning nerves into quiet resolve.</span></li>\n</ul>\n<p><span style=\"\">I recall a buddy&mdash;let's call him Jax&mdash;who bombed his first big interview without a jacket. Walked in shirt-sleeved, vulnerable. Got ghosted. Next time? He layered up in something reminiscent of a rocky tiger jacket, bold patterns flexing quiet power. Nailed it. Craftsmanship matters here: hides tanned slow, not rushed, so they age with you, not against you. It's protection that evolves, scars and all.</span></p>\n<p><span style=\"\">Winter memories hit different too. Snow crunching under boots, breath fogging the air, that group huddle after a game where everyone's half-frozen but alive. A jacket's your fortress then&mdash;insulated lining trapping heat like a promise kept. I think of my own kid days, piling into one too big for me, feeling invincible as we built forts from drifts. Now? It's the same magic in adult form: a piece where quilting isn't random, but engineered for warmth without bulk. Protection philosophy: jackets remind us we're fragile, but built to endure.</span></p>\n<h2><strong>Confidence Unleashed: Jackets That Make You Strut</strong></h2>\n<p><span style=\"\">Confidence isn't born; it's borrowed sometimes, stitched into what you wear. Jackets crank that dial, turning \"maybe\" into \"watch me.\" It's Aristotelian&mdash;virtue as habit, and a jacket habits you into boldness. Relatable moments? That first job interview again. 
You're knotting your tie wrong, the mirror is lying to you, but zip up a rocky Balboa tiger jacket-inspired layer, and bam&mdash;shoulders square, chin up. The bold style, those tiger motifs roaring subtle fire, it's like channeling a fighter's unshakeable grit. Craftsmanship shines: hardware that gleams without screaming, leather so supple it moves like your own skin.</span></p>\n<ul>\n<li style=\"\"><span style=\"\">Bold collars that frame your jawline, demanding respect without a word.</span></li>\n<li style=\"\"><span style=\"\">Structured shoulders that broaden your stance, designed based on real-world testing.</span></li>\n<li style=\"\"><span style=\"\">That signature snap&mdash;zip or buttons clicking with purpose, fueling your inner champ.</span></li>\n</ul>\n<p><span style=\"\">Jax again: post-interview glow-up. He started wearing his jacket everywhere&mdash;coffee runs, dates, even gym sessions. Confidence snowballed. \"It's like the jacket's got stories,\" he'd say. And it does&mdash;aged patina from thoughtful tanning, patterns etched deep for fade-proof presence. No wonder </span><strong>The Movie Fashion</strong><span style=\"\"> echoes this: style drawn from cinematic underdogs who jacket up and conquer.</span></p>\n<p><span style=\"\">Late-night walks build that too. Alone, thoughts spiral, but a jacket with rocky tiger jacket swagger changes the script. You catch your reflection in a puddle&mdash;damn, you look ready for anything. Confidence surges; steps quicken. It's the design: asymmetrical zippers for edge, linings that whisper against fabric like a secret boost. Philosophically, it's Camus' absurd hero&mdash;rebelling against meaninglessness by owning your silhouette.</span></p>\n<p><span style=\"\">Winter memories? They're confidence camp. Remember sledding fails turning into epic tales, jacket soaked but holding strong? That resilience transfers. 
Huddle with friends, your rocky Balboa leather jacket standing out&mdash;bold hues cutting snow's monochrome. Laughter flows easier when you're warm inside and out. Crafted right, with storm flaps and adjustable hems, it's confidence you feel, not flaunt.</span></p>\n<h2><strong>Self-Expression: Jackets as Your Unspoken Manifesto</strong></h2>\n<p><span style=\"\">Jackets don't just cover; they broadcast. They're canvas, manifesto, middle finger to conformity. Sartre's existentialism nails it&mdash;you're condemned to be free, so express it. First job interview? You're not just applying; you're declaring who you are. A jacket with champion spirit, tiger stripes nodding to rocky Balboa tiger jacket legacy, says \"underdog rising.\" Not flashy&mdash;subtle distressing from artisan hands, panels pieced for movement that mirrors your hustle.</span></p>\n<ul>\n<li style=\"\"><span style=\"\">Unique distressing that tells your wearing story, no two alike.</span></li>\n<li style=\"\"><span style=\"\">Pattern plays&mdash;like tiger ferocity tamed into everyday bold style.</span></li>\n<li style=\"\"><span style=\"\">Customizable fits, design celebrating individuality over mass production.</span></li>\n</ul>\n<p><span style=\"\">Jax customized his: patches from his rides, turning it into pure self. Dates noticed; convos deepened. The Movie Fashion captures this&mdash;icons who jacket their truth.</span></p>\n<p><span style=\"\">Late-night walks? Pure expression. No audience, so it's raw you. A rocky tiger jacket gleams under lamps, motifs alive in shadows. It's therapy&mdash;expressing the wild inside without words. Design depth: embossed textures for tactility, vents for freedom.</span></p>\n<p><span style=\"\">Winter memories shine here. Family pics, you in that standout jacket amid whites&mdash;your spark. Friends tease, but envy it. 
Craftsmanship: dyes that hold true, threads triple-stitched for life's tugs.</span></p>\n<h2><strong>Craftsmanship and Design: The Soul Stitched In</strong></h2>\n<p><span style=\"\">None of this magic happens without heart in the make. Craftsmanship isn't buzz; it's reverence&mdash;hides selected for character, tanned in small batches to breathe. Design? Purposeful poetry. Take a rocky Balboa leather jacket: patterns evoking tiger power, but scaled for streets. Seams bias-cut for flex, linings moisture-wicking. It's protection because it's thought through; confidence from fit that flatters; expression via details like hidden pockets for secrets.</span></p>\n<p><span style=\"\">Philosophically, Heidegger's \"ready-to-hand\"&mdash;jackets become extensions of being. Relatable: that interview zip-up feels predestined. Late-night drape? Intuitive. Winter wrap? Essential.</span></p>\n<ul>\n<li style=\"\"><span style=\"\">Full-grain leathers that patina beautifully, aging with grace.</span></li>\n<li style=\"\"><span style=\"\">Ergonomic patterns from body scans, not guesswork.</span></li>\n<li style=\"\"><span style=\"\">Sustainable sourcing&mdash;hides saved from waste, dyes eco-bound.</span></li>\n</ul>\n<p><span style=\"\"><a href=\"https://www.themoviefashion.com/\">The Movie Fashion</a> vibes thrive here: movie heroes' jackets weren't props; they were crafted legends.</span></p>\n<h2><strong>Real-Life Moments: Characters Who Jacket Up</strong></h2>\n<p><span style=\"\">Meet Lena: first interview, shaky but jacketed in bold style. 
Landed the gig, credits her rocky tiger jacket armor.</span></p>\n<ul>\n<li style=\"\"><span style=\"\">Jax's late-nights: rocky Balboa tiger jacket turned wanderer to warrior.</span></li>\n</ul>\n<p><span style=\"\">Trio in winter: jackets clashing colors, memories forged.</span></p>\n<p><span style=\"\">These aren't tales; they're us.</span></p>\n<h2><strong>Final Thoughts</strong></h2>\n<p><span style=\"\">Jackets like the rocky Balboa leather jacket aren't just gear&mdash;they're philosophical lifelines. They protect our vulnerabilities, ignite confidence in doubt's shadow, and let us express the untamed self. From interviews to walks to winters, they're woven into our becoming. Craft and design make it eternal: choose pieces that echo your champion spirit.&nbsp;</span></p>",
        "topics": [
            {
                "id": 4551,
                "name": "clothing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4526,
                "name": "fashion",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4549,
                "name": "jacket",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4550,
                "name": "lifestyle",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166343,
            "forum_user": {
                "id": 166107,
                "user": 166343,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/47a65b8e7b7dfddeb036893e4a48aaf2?s=120&d=retro",
                "biography": "The Rocky Tiger Jacket is inspired by the fearless spirit of a true champion. Featuring a striking tiger design, it represents strength, courage, and determination. Crafted with durable materials, this jacket offers comfort and style, making it perfect for casual wear or fan events.",
                "date_modified": "2026-04-01T14:20:52.533502+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "davidnathon",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "rocky-balboa-tiger-jacket-with-champion-spirit-and-bold-style",
        "pk": 4589,
        "published": false,
        "publish_date": "2026-04-03T14:01:34.990025+02:00"
    },
    {
        "title": "Futures of Listening: How our ways of listening will change in the near future - Kwangrae Kim, Prof. Suk-Jun Kim (ACC Sound Lab)",
        "description": "The <Futures of Listening> project goes beyond the mere study of \"sound\"; it is a deep exploration of the human act of \"listening\". It treats listening as a complex platform where various forces compete for control over which sounds are heard, how they are heard, and for whom. It critically examines historical and current listening practices, asking how they might evolve over the next ten or twenty years.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\">Presented by: Kwangrae&nbsp;<span>Kim</span><br /><a href=\"https://forum.ircam.fr/profile/rae1101/\">Biography</a></p>\r\n<p style=\"text-align: justify;\"><a href=\"https://forum.ircam.fr/profile/rae1101/\"><br /></a>Co-presenter:<span><span>&nbsp;</span>Prof. Suk-Jun Kim&nbsp;<span>(University of Aberdeen, Scotland, UK)&nbsp;</span><br /><a href=\"https://forum.ircam.fr/profile/reddoorsound/\">Biography</a></span></p>\r\n<p style=\"text-align: justify;\"><span></span></p>\r\n<p><span><img src=\"/media/uploads/hbhuavqa.jpg\" alt=\"\" width=\"497\" height=\"373\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\">ACC Sound Lab is a project led by the National Asian Culture Center (ACC)*, as part of an interdisciplinary content research and development initiative. The project focuses on artistic expression through sound as a medium, producing a variety of immersive content. Each year, a small team of sound artists and interdisciplinary artists carries out artistic inquiries through various research activities, which culminate in compositions, public presentations, and exhibitions.</p>\r\n<p style=\"text-align: justify;\">From May to December 2023, the first ACC Sound Lab 2023 launched an artistic research project entitled Futures of Listening. This multi-year project explores the evolving nature of auditory experiences over the next two decades. 
It treats listening as a complex platform where various forces vie for control over what is heard, how, and for whom.&nbsp;The project goes beyond the mere study of \"sound\" as such and aims to pose a series of questions about the act of listening, examining our past and present listening attitudes and their contexts while imagining how such traits will transform our ways of listening. In 2023, the focus was placed on the urban soundscape of Asia, in order to examine the socio-economic, cultural, and political significance of listening and to explore possible futures of auditory culture.<br />As part of this research - in its outcomes as much as in its methods - the ACC Sound Lab produced a number of artistic results in the form of live performances, public presentations, field-trip documentation, and sound installations, the latest of which is currently on display at the ACC. In this presentation, we will discuss one of these installations, entitled <em>Whispers in the Urban Fabric</em>.<br />For <em>Whispers in the Urban Fabric</em>, part of the exhibition entitled \"Urbanscape: Connectivity and Coexistence\", we built a curved sound wall with an array of 48 loudspeakers hidden behind thin perforated metal plates that rise from the floor to the wall and cover the ceiling. This architectural form of the sound wall immerses the audience in sounds recorded during the ACC Sound Lab's research trips through various regions of Asia. 
We used IRCAM/FLUX:: Software Engineering's SPAT Revolution to spatialize the movements of the sound materials, offering the audience a multilayered, expansive sensory encounter.&nbsp;</p>\r\n<p><em>Whispers in the Urban Fabric</em> seeks to explore how these auditory experiences illuminate the social, economic, political, and ecological identities of Asian cities. It also examines the coexistence of diverse auditory perspectives and experiences within these urban environments.<br />In this talk, we will present the main research questions of Futures of Listening and the research activities carried out by the ACC Sound Lab team in 2023. We will then introduce Whispers in the Urban Fabric and discuss how SPAT Revolution was used to realize this sound-wall installation.</p>\r\n<p>*The National Asian Culture Center (ACC) is an international arts and cultural exchange organization located in Gwangju, South Korea.</p>\r\n<p><strong><img src=\"https://forum.ircam.fr/media/uploads/mm3j0vp0.png\" alt=\"\" width=\"457\" height=\"244\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></strong></p>\r\n<p><strong>ACC Sound Lab 2023</strong><br />Director/Chief Curator <strong>Kim Jiha</strong><br />Curator <strong>Kim Kwangrae</strong><br />Head of Research <strong>Kim Suk-Jun</strong><br />Researchers <strong>Yoon Jiyoung, Jo Yeabon, Cha Mihye</strong><br />Managers <strong>Park Eunhyun</strong><br />Technician <strong>Jo Yeabon</strong><br />Production and technical support <strong>Ahn Jae-Young</strong><br />Hosted/organized by the<strong> National Asian Culture Center, 
Gwangju</strong></p>\r\n<p><strong></strong></p>\r\n<p style=\"text-align: justify;\"><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 1767,
                "name": "48channel",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1769,
                "name": "ACC Sound Lab",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 619,
                "name": "Immersivesound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1768,
                "name": "sound research",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17978,
            "forum_user": {
                "id": 17972,
                "user": 17978,
                "first_name": "Kwangrae",
                "last_name": "Kim",
                "avatar": "https://forum.ircam.fr/media/avatars/raee_Zazl8yg.png",
                "avatar_url": "/media/cache/88/ed/88ed88598f6bf53bf01126edaf956bfb.jpg",
                "biography": "Kwangrae Kim is a composer, researcher, and curator immersed in the dynamic interplay of sound, space, and imagination. His artistic journey is marked by a diversity of compositional methods, including fixed media acousmatic, audio-visual works, soundscape compositions, and live interactive performances that blend visuals, instruments, and installations.\n\nFascinated by the complex ways we perceive sound, Kwangrae's curiosity informs his approach to sound creation. He designs electroacoustic compositions in various formats, from fixed media to live performances and interactive installations. His work often includes multichannel sound systems, allowing for intricate explorations of sound within spatial dimensions, deepening his grasp of spatial sound and offering listeners a unique, immersive experience.\n\nKwangrae's works have been showcased at diverse conferences and festivals worldwide, including the sonADA 2016 (UK), NYC EMF 2014 & 2017 (US), L'Autre Musique 2018 (France), and ICMC 2019 (US).\n\nCurrently, he leads sound art exploration at the National Asian Culture Centre's ACC Sound Lab, pioneering research and exhibitions that challenge and expand the horizons of sound art.",
                "date_modified": "2025-10-20T16:11:12.300295+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1001,
                        "forum_user": 17972,
                        "date_start": "2024-11-13",
                        "date_end": "2025-11-13",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "rae1101",
            "first_name": "Kwangrae",
            "last_name": "Kim",
            "bookmarks": []
        },
        "slug": "futures-of-listening-how-our-ways-of-listening-will-change-in-the-near-future-acc-sound-lab",
        "pk": 2727,
        "published": true,
        "publish_date": "2024-02-13T16:54:06+01:00"
    },
    {
        "title": "The making of the exhibition \"Fabriques du son numérique\" by François-Xavier Féron",
        "description": "The making of the exhibition Fabriques du son numérique, or how sound synthesis was developed in France during the 1970s by François-Xavier Féron\r\n\r\nWhere, when, and how were the first computer music pieces produced? Who are the pioneers of French-style digital sound? As part of the RAMHO project (An Oral History of Musical Research and Musical Acoustics in France), which is based on both archival research and the collection of oral testimonies, the exhibition Fabriques du son numérique aims to retrace the development of computer music in France during the 1970s.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>In the second half of the twentieth century, musical practices were totally transformed by the emergence and democratisation of techniques in the analysing, synthesising and processing of sound, and by increased scientific knowledge of sound phenomena. Musical acoustics, computer music and what is called, more generally, musical research &ndash;&nbsp;in combination with the integration of new technologies into music&nbsp;&ndash; have led scientists and musicians to collaborate within new kinds of institutions and to develop new tools, knowledge and expertise. The RAMHO project (An Oral History of Musical Research and Musical Acoustics in France) is based on both archival research and the collection of oral testimonies. It focuses on the origins of these research centres &ndash;&nbsp;many of which were created in the 1960s, 1970s and 1980s&nbsp;&ndash; and the way in which the links between science and the arts were developed. We therefore focus on the development of digital sound synthesis in France. Where, when, and how were the first computer music pieces produced? Who are the pioneers of French-style digital sound? During this presentation, we will first describe the RAMHO project and give a preliminary assessment. 
Then, we will show how it led to the exhibition <em>Fabriques du son num&eacute;rique</em>, conceived as an extension of the conference <em>Musiques de synth&egrave;se: la </em>French touch<em> des ann&eacute;es 1970</em>, held on June 3 and 4, 2025, at the <em>Maison des Sciences Humaines et sociales Paris Nord</em> as part of the Ircam Manifeste festival.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a0158d4de4ee325353e4a3f02498a89c.jpg\" /></p>",
        "topics": [],
        "user": {
            "pk": 18123,
            "forum_user": {
                "id": 18117,
                "user": 18123,
                "first_name": "François-Xavier",
                "last_name": "Féron",
                "avatar": "https://forum.ircam.fr/media/avatars/FeronLD.jpeg",
                "avatar_url": "/media/cache/53/41/5341b2c69d2c8a39e64acbdd1618bc46.jpg",
                "biography": "François-Xavier Féron has a Master’s degree in musical acoustics and a PhD in musicology (Sorbonne University). Since 2013, he has been a tenured researcher at the French National Centre for Scientific Research (CNRS). He is currently a member of the Analysis of Musical Practices research group at the STMS-Ircam laboratory (Paris) and a collaborator of the Centre for Interdisciplinary Research in Music Media and Technology (Montreal). His research deals with various contemporary musical practices, focusing on creative processes, performances, and the analysis of works and (psycho-)acoustic phenomena. He currently coordinates the RAMHO project, which combines oral history and archival studies to trace the history of musical acoustics and musical research in France during the second half of the twentieth century. He is also co-editor-in-chief of the Ircam ANALYSES database.",
                "date_modified": "2026-02-17T09:58:35.754694+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 220,
                        "forum_user": 18117,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 175,
                                "membership": 220
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "feron",
            "first_name": "François-Xavier",
            "last_name": "Féron",
            "bookmarks": []
        },
        "slug": "the-making-of-the-exhibition-fabriques-du-son-numerique-or-how-sound-synthesis-was-developed-in-france-during-the-1970s",
        "pk": 4354,
        "published": true,
        "publish_date": "2026-02-13T17:04:57+01:00"
    },
    {
        "title": "Hallucination in NOVATRON by Deyu Zeng (China)",
        "description": "Hallucination in NOVATRON is an immersive project that explores AI hallucination in deepfake technologies and its impact on our understanding of reality. Through digital storytelling, performance, and media experiments, the project explores how AI hallucination changes the way we see and understand the world.\r\n\r\nBy creating fictional AI-driven roles that interact with real people, Hallucination in NOVATRON questions the growing influence of AI hallucination. It highlights the fine line between simulation and deception, inviting audiences to experience a world where it becomes difficult to tell what is real and what is not.",
        "content": "<p></p>\r\n<p style=\"text-align: center;\"><strong>Hallucination in NOVATRON: Speculative Design and Machine Learning-driven Immersive Installations</strong><br /><em>IRCAM Forum Workshops 2025, Taipei</em></p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c96862548eb5f9448fefac0074aaeae0.jpg\" /></p>\r\n<p>&nbsp;</p>\r\n<p>This project examines the phenomenon of AI hallucination&mdash;when artificial intelligence generates fabricated or misleading content&mdash;through the lens of deepfake technologies. With the mainstream adoption of large language models (LLMs) and synthetic media, hallucination has moved from a technical limitation to a pressing societal challenge, shaping how we experience trust, deception, and truth.</p>\r\n<p><em>Hallucination in NOVATRON</em> creates a fictional AI hub, &ldquo;NOVATRON,&rdquo; as a speculative platform where simulation and deception intertwine. Research traced the conceptual evolution of hallucination from early neural network creativity to LLM-driven mainstream recognition, focusing on its sensory dimensions: visual (deepfake video manipulation), auditory (voice cloning and synthetic speech), and verbal (AI-generated narratives). These forms reveal how deepfake technologies amplify hallucination, intensifying fraud, eroding social trust, and complicating our ability to distinguish fact from fiction.</p>\r\n<p>Methodologically, the project combined performance-based interviews, symbolic props (masks, prosthetics), and role-play analysis to explore lived experiences of deception in professional and everyday contexts. 
The design process materialized in a multi-layered installation: a NOVATRON website featuring fictional staff profiles, printed correspondence that blurred digital and tangible communication, and AI-synthesized interview videos in which non-English speakers were voiced through AI-generated translations.</p>\r\n<p>The final exhibition invites audiences into this staged hallucination: navigating the website, listening to synthetic voices, reading deceptive letters, and watching uncanny interviews. By orchestrating these encounters, <em>Hallucination in NOVATRON</em> asks participants to confront the fragile boundary between simulation and deception, highlighting how AI hallucination destabilizes shared reality and challenges the foundations of social trust in an AI-mediated future.</p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 1249,
                "name": "Immersive Installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3486,
                "name": "Machine Learning-driven Animation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 815,
                "name": "soundscape",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3487,
                "name": "Speculative Design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 133405,
            "forum_user": {
                "id": 133230,
                "user": 133405,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1dafa33f1eef60035539184dd24c6eea?s=120&d=retro",
                "biography": "Jacob (Deyu) Zeng is a multimedia artist whose practice spans moving image, performance, sound, and interactive environments. His work investigates how political, technological, and institutional systems shape space, and how individuals encounter power through everyday environments and infrastructures. Through site-based methods — including walking, sensing, and interaction — he develops the Intersectionality project, translating research into spatial and experiential forms.\n\nJacob holds a BA from the University of the Arts London and is currently pursuing an MA at the Royal College of Art. His work has been presented at institutions including IRCAM Forum and the London Science Museum.",
                "date_modified": "2026-03-30T02:38:12.368946+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "zengdeyu2003",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3755,
                    "user": 133405,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "hallucination-in-novatron",
        "pk": 3755,
        "published": true,
        "publish_date": "2025-10-03T10:46:53+02:00"
    },
    {
        "title": "Bridging Audio, Visual, and AI Domains using the Elixir language, by Thibaut Barrère",
        "description": "This research project focuses on developing an integrated framework for music technology, leveraging Elixir to create a comprehensive ecosystem for audio processing, visual representation, and artificial intelligence integration.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p>Presented by: Thibaut Barr&egrave;re</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/thibautbarrere/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p><span>This research project focuses on developing an integrated framework for music technology, leveraging Elixir to create a comprehensive ecosystem for audio processing, visual representation, and artificial intelligence integration.</span></p>\r\n<p><span>As an independent developer, I am designing and implementing a consolidated system aimed at handling diverse aspects of music creation, performance, and analysis within a single technological stack.</span></p>\r\n<p><span>The framework, built on Elixir, interfaces seamlessly with MIDI and audio capabilities, allowing for soft real-time audio stream creation, MIDI event handling, and multi-sound card support. It allows for live, reactive interfaces, both web-based and non-web, including SVG piano rolls and other dynamic graphical representations of musical data. The system's hot-reloading capabilities facilitate rapid prototyping and live performances.</span></p>\r\n<p><span><img alt=\"Controlling DMX lighting from Elixir\" src=\"https://forum.ircam.fr/media/uploads/user/5a1cfdc0f78ba400a1822b822d8cfd37.png\" /></span></p>\r\n<p><span>To enhance performance, the framework interfaces with C and Rust, enabling efficient utilization of drivers and specialized interfaces. 
It leverages Elixir and Erlang's native clustering abilities to interconnect multiple nodes on a network, enabling distributed processing and synchronization across devices.</span></p>\r\n<p><span>The integration of Large Language Models (LLMs) within the same technological stack allows for the extraction of musical knowledge, scores, and theory from AI models. The framework extends beyond audio, incorporating light projections via DMX protocols and implementing live image recognition and video processing using Elixir libraries.</span></p>\r\n<p><span>This project brings together diverse capabilities within a single, coherent system, allowing for personal exploration of the intersections between music, technology, and artificial intelligence.</span></p>\r\n<p><span>As development progresses, this integrated approach may lead to new insights and creative possibilities in digital music creation and performance.</span></p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 87543,
            "forum_user": {
                "id": 87440,
                "user": 87543,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/91eb330fb36d1e03c856574dfb77d2bc?s=120&d=retro",
                "biography": "I am an independent consultant (programmer, data-engineer, dev-ops / architect, advisor) & computer music researcher. I started tinkering with code and music in my childhood, and never really stopped ;-)",
                "date_modified": "2025-03-23T18:41:34.556957+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "thibautbarrere",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "bridging-audio-visual-and-ai-domains-using-the-elixir-language-by-thibaut-barrere",
        "pk": 3283,
        "published": true,
        "publish_date": "2025-02-12T23:25:04+01:00"
    },
    {
        "title": "synaptic._null: An Experimental Audiovisual Performance on Perceptual Collapse by Arcky Tang",
        "description": "An exploration of perception, logic breakdown, and drifting consciousness through real-time audiovisual performance, combining Ableton Live and TouchDesigner in an immersive setting.",
        "content": "<p></p>\r\n<p><em>synaptic._null</em> is an experimental audiovisual performance created by Arcky Tang, an artist working between sound, image, and real-time systems. The project investigates the collapse of perception and the instability of consciousness, drawing from philosophical frameworks such as Baudrillard&rsquo;s <em>Simulacra and Simulation</em> and Morton&rsquo;s concept of hyperobjects, alongside phenomenology and quantum cognition.</p>\r\n<p>The work unfolds as a live performance in three nonlinear parts&mdash;<strong>Simulacrum, Paradox, and Consciousness</strong>&mdash;each blurring the boundary between sensory input and cognitive expectation. Rather than following a linear narrative, the performance operates as a generative system where sound and image interact, dissolve, and reassemble in unpredictable ways.</p>\r\n<p>Technically, the piece is constructed through a feedback loop between <strong>Ableton Live</strong> (handling modular sound design, looping, and experimental structures) and <strong>TouchDesigner</strong> (driving generative and audio-reactive visuals). Signals are exchanged in real time via MIDI/OSC, allowing each domain to destabilize and reshape the other. This system reflects the project&rsquo;s conceptual interest in perceptual collapse: just as one thinks they recognize a pattern, it dissolves into noise, only to re-emerge in a different form.</p>\r\n<p><em>synaptic._null</em> was presented at <strong>Outernet London</strong> as part of the RCA Digital Direction graduation program, staged across immersive projection surfaces with multichannel sound. The performance emphasizes immediacy and ephemerality&mdash;its form is never fixed, and each iteration becomes a unique drift through audiovisual instability.</p>\r\n<p>At its core, the project aims to question how we construct meaning under conditions of uncertainty. 
By placing the audience inside a constantly shifting sensory environment, <em>synaptic._null</em> invites participants to experience not clarity, but <strong>the beauty of collapse itself&mdash;where perception becomes porous, and consciousness drifts beyond stable form</strong>.</p>",
        "topics": [
            {
                "id": 128,
                "name": "Audiovisual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 850,
                "name": "experimental",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 146,
                "name": "Perception",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 126555,
            "forum_user": {
                "id": 126388,
                "user": 126555,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/2_fDgLESh.jpg",
                "avatar_url": "/media/cache/93/57/935711d78c8c07a43b4eb327169e2aaa.jpg",
                "biography": "Arcky is an audiovisual artist working at the intersection of surreal abstraction, consciousness exploration, and sensory experimentation. His practice expands the boundaries of perception and dissolves the logic of known reality—seeking transcendence through the collapse of structure and the intuitive resonance of sound and light.\n\nDeeply influenced by phenomenology, stream-of-consciousness aesthetics, and meditative states, Arcky’s work embraces dream logic, glitch textures, and ephemeral visuals to create immersive, improvisational performances. These performances become portals for nonlinear storytelling and cognitive dissonance, inviting the audience into a fluid space between detachment and empathy, where perception folds and time distorts.\n\n\nArcky believes the world is an absurd illusion. Yet, by surrendering to the unknown and embracing the instability of logic, one may unlock new dimensions of being. For him, creation is not about control—but about letting go, listening deeply, and finding meaning in the unseen.",
                "date_modified": "2025-10-12T14:22:06.449888+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "arckytang",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "synaptic_null-an-experimental-audiovisual-performance-on-perceptual-collapse",
        "pk": 3778,
        "published": false,
        "publish_date": "2025-10-07T14:33:34+02:00"
    },
    {
        "title": "A Complete Guide to Orbi Log In and Network Management",
        "description": "Orbi log in is the process of accessing the Orbi router’s admin panel to control and customize network settings. It enables users to manage Wi-Fi connections, update security options, and monitor connected devices for a smoother and more secure internet experience.",
        "content": "<p><a href=\"https://orbiillogin.com/\">Orbi log in</a> is an essential process for anyone using an Orbi router system to manage their home or office network. It provides access to the router&rsquo;s administrative dashboard, where users can control various settings and ensure their internet connection runs smoothly.</p>\n<p><strong>What is Orbi Log In?</strong><br>Orbi log in refers to the method of accessing the router&rsquo;s control panel through a web browser or mobile app. Once logged in, users can view and modify network configurations, making it easier to personalize and secure their Wi-Fi environment.</p>\n<p><strong>Why Orbi Log In is Important</strong><br>Logging into your Orbi system allows you to take full control of your network. You can update your Wi-Fi name and password, monitor connected devices, set parental controls, and improve overall security. Regular access helps keep your network optimized and protected from unauthorized users.</p>\n<p><strong>How to Access Orbi Log In</strong><br>To perform an Orbi log in, users typically open a web browser and enter the designated address in the search bar. After that, they enter the required login credentials, such as username and password, to access the admin dashboard. From there, all network settings become available.</p>\n<p><strong>Features Available After Orbi Log In</strong><br>Once logged in, users can:</p>\n<ul>\n<li>Change Wi-Fi settings and passwords</li>\n<li>Update router firmware</li>\n<li>Monitor internet usage and connected devices</li>\n<li>Set up guest networks</li>\n<li>Enhance network security settings</li>\n</ul>\n<p><strong>Tips for a Secure Orbi Log In</strong><br>It is important to use a strong password and update it regularly. Avoid sharing login details and ensure your router firmware is always up to date. 
These steps help maintain a safe and reliable network.</p>\n<p><strong>Conclusion</strong><br>Orbi log in is a simple yet powerful way to manage your router and maintain a secure internet connection. By understanding how it works and using its features effectively, users can enjoy a seamless and well-controlled networking experience.</p>",
        "topics": [],
        "user": {
            "pk": 166328,
            "forum_user": {
                "id": 166092,
                "user": 166328,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/9e2eb12d693f81eb97e4d1da7d504150?s=120&d=retro",
                "biography": "Orbi log in refers to the process of accessing the admin panel of an Orbi router system. It allows users to manage their Wi-Fi network, change settings, update firmware, and monitor connected devices. This login is usually done through a web browser using a local web address or dedicated app.",
                "date_modified": "2026-04-01T10:24:15.974342+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "davidbrown201960",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "a-complete-guide-to-orbi-log-in-and-network-management",
        "pk": 4569,
        "published": false,
        "publish_date": "2026-04-01T10:26:31.148835+02:00"
    },
    {
        "title": "Artists & Engineers: How We Communicate – Behind the Scenes of Sound Art (Installation/Theatre) with Multichannel Audio by Miyu Hosoi",
        "description": "This artist talk by sound artist Miyu Hosoi will explore the process of creating large-scale sound installations and stage productions. She will discuss what kinds of skilled engineers artists require and how artists and engineers collaborate to design and build an integrated sound system.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p><img alt=\"Anechoic Chamber: ONOSOKKI /  Photo: Eito Takahashi(TWOTONE) \" src=\"https://forum.ircam.fr/media/uploads/user/4011a3a69ffc71aeae190596de3dd7d0.jpg\" /></p>\r\n<p>Presented by : Hosoi Miyu</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/miyuhosoi/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p>The primary goal of this presentation is to share a example of how artists and engineers communicate when creating works that incorporate technology. Additionally, it aims to facilitate more dynamic and meaningful conversations between artists and engineers. As part of this discussion, I will also introduce the systems behind some of my multi-channel audio installations/theatre pieces.</p>\r\n<p>This presentation approaches the topic from a slightly different perspective than most, but I believe it is a crucial and necessary discussion. Of course, it would be impossible for me to speak on behalf of all artists regarding their relationships with engineers. Instead, I will share my personal experiences.</p>\r\n<p><br /><img alt=\"IR Recording in a cave with Yamaguchi Center for Arts and Media[YCAM]\" src=\"https://forum.ircam.fr/media/uploads/user/a0b1aaf7a1b1baf7c902b59ae9062859.jpg\" width=\"1196\" height=\"897\" /></p>\r\n<p>The inspiration for this presentation comes from my recent experiences receiving collaboration requests from various engineers, companies, and research institutions. 
While I am grateful for these opportunities, I have increasingly noticed gaps&mdash;sometimes small, sometimes significant&mdash;between artists and engineers when we start communicating. This is particularly evident among engineers who have not previously worked on artistic projects.</p>\r\n<p>That said, we as artists must never forget to appreciate the fact that these engineers are interested in engaging with art. Many works could not have come to life without their contributions.</p>\r\n<p>These gaps can manifest in various ways&mdash;differences in the expected quality, the balance between concept and technology, and many other aspects. Since every artist and engineer has their own unique values and perspectives, it is difficult to define these gaps in a single, clear-cut way. For instance, there can be fundamental differences in whether technology is being used to maximize the artistic concept or whether the artwork itself is being used as a means for technological development.</p>\r\n<p><img alt=\"with Yamaguchi Center for Arts and Media[YCAM]\" src=\"https://forum.ircam.fr/media/uploads/user/88c060ad422828b52e400663caa5ef35.jpg\" width=\"693\" height=\"459\" />&nbsp; &nbsp; &nbsp;<img alt=\"with Aichi Arts Center\" src=\"https://forum.ircam.fr/media/uploads/user/e2348b68fb9f5d0f8fcd9d192f3ceb8d.jpg\" width=\"611\" height=\"458\" /></p>\r\n<p>Institutions like IRCAM, where dedicated teams exist to support artistic production, are quite rare. Even highly skilled engineers may not always have opportunities to engage in the process of bringing an artistic concept to life. And artists also tend to be demanding, sometimes unreasonably so.</p>\r\n<p>However, despite these gaps, I firmly believe that we as artists must not stop engaging in conversations with engineers. 
For artists who heavily rely on technology, gaining interest from engineers and research institutions is essential, as continued artistic development would be impossible without their support and understanding.</p>\r\n<p>At the same time, I believe that we, as artists, also need to clearly communicate how we think and approach our creative processes.</p>\r\n<p><img alt=\"with my team @ The National Museum of Emerging Science and Innovation\" src=\"https://forum.ircam.fr/media/uploads/user/c082e088a00317492434966305ea51be.jpg\" width=\"1004\" height=\"753\" /></p>\r\n<p>It would be a great pleasure if, in the future, this presentation leads to new collaborations with those who attend.</p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2629,
                "name": "production",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2628,
                "name": "theatre piece",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 87770,
            "forum_user": {
                "id": 87666,
                "user": 87770,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/MiyuHosoi_01_1200.jpg",
                "avatar_url": "/media/cache/15/cc/15cc175678389c17fb6b7a860b12e54b.jpg",
                "biography": "Born in 1993, based in Tokyo, sound artist Miyu HOSOI creates works featuring multiple recordings of her own voice, sound installations using multi-channel sound systems, outdoor installa-tions, performing arts productions, focusing on the way sound transforms the percep-tion of space and situations.\nHer works have been presented at Barbican Centre London, Tokyo International Haneda Airport, Tokyo Metropoli-tan Hibiya Park, Nagano Prefectural Art Museum, Audio Engineering Society[AES], NTT InterCommunication Center[ICC] Anechoic Room, Yamaguchi Center for Arts and Media[YCAM], Aichi Arts Center and more.  In 2024, on stage as a performer at La Biennale di Venezia – Danza 2024, for the theater piece “Tangent” by Shiro Takatani(DUMB TYPE).",
                "date_modified": "2025-11-04T18:05:33.476931+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "miyuhosoi",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-behind-the-scenes-of-sound-art-worksinstallationtheatre-piece-using-multichannel-audio-by-miyu-hosoi",
        "pk": 3281,
        "published": true,
        "publish_date": "2025-02-12T03:42:06+01:00"
    },
    {
        "title": "Eternal Terra Ear (Visceral Sonic Oyster) x Center for the Future Open Call for Ocean Action by Yidi Wang and Jessica Newfield",
        "description": "An eco-intelligent device with spatialized sonic features simulating oyster and marine biomass ecosystem, movement, and vibrations as the new energy/sound highlighting bio-synaesthesia.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p>A synthetic future interaction between the visceral body, floating quantum, electricity, and sound will be inseparable in the spatialized biosphere. Its 3D visual representation in a room of more than space but an inner pattern where synthetic complexity grows points to the kind of science that&rsquo;s led by nature itself and citizens who are intimate with them such as in Hong Kong aquaculture, which will be mapped as a systematic indicator as another layer of the spatial installation.&nbsp;<br />This project aims to echo a planetary digital fictional infrastructural future for places of complexity that philosophical think tanks could address in this era leading the way for industry, politics, art creation, and even regional planning and governance extending from Hong Kong&rsquo;s marine environment. Through realizing in a musical spatial installation form, a visceral intimacy with a non-human creature from the marine eco-system will be touched on, contrasting with recorded queer human body's movement, sound, and spatial composition one generates in an imaginative murmuring transforming the fictional into the possible societal bodily norms. 
An intellectual and impactful link with environmental NGOs is expected throughout this creative process.</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/lazulioa/\" target=\"_blank\">Presented by : Yidi Wang</a>, Jessica Newfield</p>\r\n<p><img alt=\"Landscape Synaesthesia\" src=\"https://forum.ircam.fr/media/uploads/user/142dbced1b3fe7f98415593b81e2d0ce.jpg\" width=\"744\" height=\"558\" /></p>",
        "topics": [
            {
                "id": 2472,
                "name": "Biomass",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2473,
                "name": "Biosphere",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2470,
                "name": "Eco-Intelligence",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2471,
                "name": "Planetary Sonic Device",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2469,
                "name": "Synaesthesia",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 56658,
            "forum_user": {
                "id": 56595,
                "user": 56658,
                "first_name": "Yidi",
                "last_name": "Wang",
                "avatar": "https://forum.ircam.fr/media/avatars/photo_wang.jpg",
                "avatar_url": "/media/cache/35/8c/358c7809fd7944dabb045c1837221255.jpg",
                "biography": "Social architect and infrastructural strategist. She is the founder of Eternal Terra Ear, and a public speaker for ETE’s post-humanitarian cultivation at the IRCAM Forum Center Pompidou, European Citizen Science Association, and UN Ocean Decade conference, etc. She is a working group member at Global Network on Culture Heritage Conservation Under Climate Change Action at COST Association, European Cooperation in Science and Technology. She was a research participant at Sciences Po Paris DSA Sprint of Open Institute for Digital Transformation, and youth delegate of the World Bank summit 2025.",
                "date_modified": "2026-03-07T13:06:15.530137+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lazulioa",
            "first_name": "Yidi",
            "last_name": "Wang",
            "bookmarks": []
        },
        "slug": "visceral-oyster-eco-intelligent-planetary-infrastructure",
        "pk": 3171,
        "published": true,
        "publish_date": "2024-12-22T05:42:04+01:00"
    },
    {
        "title": "Home Staging Melbourne – Professional Property Styling Services",
        "description": "Enhance your property’s appeal with expert Home Staging Melbourne services. Sell faster and achieve higher value with professional styling and modern décor solutions.",
        "content": "<h3>Home Staging Melbourne &ndash; Transform Your Property for a Faster Sale</h3>\n<p>In today&rsquo;s competitive real estate market, <a href=\"https://thestylecast.com.au/home-staging-melbourne/\" title=\"Home Staging Melbourne\"><strong>Home Staging Melbourne</strong></a> has become an essential strategy for homeowners and property sellers looking to stand out. Professional home staging is not just about decorating&mdash;it is about creating a visually appealing and emotionally engaging environment that attracts buyers and increases the perceived value of your property.</p>\n<p>According to industry insights, staged homes in Melbourne often sell faster and can achieve higher offers because buyers can better visualize themselves living in the space.</p>\n<hr>\n<h3>What is Home Staging?</h3>\n<p>Home staging is the process of preparing a property for sale by enhancing its appearance through furniture arrangement, d&eacute;cor styling, and space optimization. The goal is to present the home in the best possible light, making it more attractive to potential buyers.</p>\n<p>Professional staging includes:</p>\n<ul>\n<li>Furniture placement and layout optimization</li>\n<li>Decluttering and organizing spaces</li>\n<li>Adding modern d&eacute;cor and accessories</li>\n<li>Highlighting key architectural features</li>\n</ul>\n<p>This strategic presentation helps create a strong first impression and boosts buyer interest.</p>\n<hr>\n<h3>Why Choose Home Staging Melbourne?</h3>\n<p>Choosing expert <strong>Home Staging Melbourne</strong> services offers multiple benefits that directly impact your property&rsquo;s sale:</p>\n<h4>1. Faster Sales</h4>\n<p>Staged homes attract more attention and tend to sell quicker compared to non-staged properties.</p>\n<h4>2. Higher Property Value</h4>\n<p>A well-presented home creates a premium perception, often leading to better offers and increased sale prices.</p>\n<h4>3. 
Better Online Presence</h4>\n<p>High-quality staging improves property photos, making listings more appealing and clickable.</p>\n<h4>4. Emotional Connection</h4>\n<p>Buyers are more likely to connect with a home that feels welcoming and ready to live in.</p>\n<hr>\n<h3>Our Home Staging Services in Melbourne</h3>\n<p>Our staging service offers a complete range of solutions designed to suit different property types and budgets:</p>\n<h4>✔ Full Home Staging</h4>\n<p>Perfect for vacant properties, including complete furniture and d&eacute;cor setup.</p>\n<h4>✔ Partial Staging</h4>\n<p>Uses existing furniture with added styling elements for cost-effective transformation.</p>\n<h4>✔ Styling Consultation</h4>\n<p>Expert advice on how to prepare your home for sale with minimal investment.</p>\n<h4>✔ Furniture &amp; D&eacute;cor Rental</h4>\n<p>Access to modern, stylish furniture that enhances your property&rsquo;s appeal.</p>\n<hr>\n<h3>Factors That Affect Home Staging Cost</h3>\n<p>The cost of <strong>Home Staging Melbourne</strong> can vary depending on several factors:</p>\n<ul>\n<li>Property size and layout</li>\n<li>Number of rooms staged</li>\n<li>Type of furniture and d&eacute;cor used</li>\n<li>Duration of staging</li>\n<li>Level of styling required</li>\n</ul>\n<p>For example, smaller apartments generally cost less to stage, while larger homes require more resources and styling effort.</p>\n<hr>\n<h3>Tips to Maximize Your Home Staging Results</h3>\n<p>To get the best return on your investment, follow these simple tips:</p>\n<ul>\n<li>Focus on key areas like the living room, kitchen, and master bedroom</li>\n<li>Keep d&eacute;cor minimal and neutral</li>\n<li>Remove personal items and clutter</li>\n<li>Ensure proper lighting in all rooms</li>\n<li>Use professional photography for listings</li>\n</ul>\n<p>These small changes can significantly improve your home&rsquo;s presentation and buyer appeal.</p>\n<hr>\n<h3>Why Home Staging is Worth the 
Investment</h3>\n<p>Investing in <strong>Home Staging Melbourne</strong> is one of the smartest decisions you can make when selling your property. It not only enhances visual appeal but also increases buyer engagement, reduces time on the market, and improves the chances of receiving competitive offers.</p>\n<p>A staged home creates a lifestyle experience, allowing buyers to imagine their future in the space&mdash;which is a powerful factor in decision-making.</p>\n<hr>\n<h3>Final Thoughts</h3>\n<p>In a fast-moving property market like Melbourne, first impressions matter more than ever. Professional <strong>Home Staging Melbourne</strong> services help you showcase your property&rsquo;s full potential, attract more buyers, and achieve the best possible sale price.</p>\n<p>Whether you are selling a small apartment or a luxury home, staging transforms your space into a desirable, market-ready property that stands out from the competition.</p>",
        "topics": [
            {
                "id": 4517,
                "name": "Home Staging Melbourne",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4518,
                "name": "Property Styling Melbourne",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166232,
            "forum_user": {
                "id": 165996,
                "user": 166232,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/dbf792ce91cb86c77000441bf0380e7c?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-03-31T07:42:06.510626+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "thestylecast",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "home-staging-melbourne-professional-property-styling-services",
        "pk": 4556,
        "published": false,
        "publish_date": "2026-03-31T07:47:29.101691+02:00"
    },
    {
        "title": "SPAT Devices -  Music Unit",
        "description": "Presented during the Ircam Forum Workshop 2023 In Paris",
        "content": "<p>Music Unit produces the <a href=\"https://www.ableton.com/en/packs/spat-bundle/\">SPAT</a> collection, <a href=\"https://www.ableton.com/en/live/max-for-live/\">Max For Live</a> plugins distributed by <a href=\"https://www.ableton.com/en/\">Ableton</a>.</p>\r\n<p>SPAT plugins make it possible to arrange and move sound sources in real or virtual auditory spaces, in 2D or 3D, thanks to advanced spatialization engines, based on the Spatialiseur processor developed at IRCAM for nearly three decades.</p>\r\n<p><img src=\"/media/uploads/6a7fac8a99cd6b475c4b13dc4c01c997.png\" alt=\"\" width=\"520\" height=\"520\" /></p>\r\n<p>The plugins are offered in two packs: SPAT Multichannel and SPAT Stereo.</p>\r\n<p>SPAT Multichannel is for artists, producers and sound engineers who want to get the most out of the multichannel setup of their studio or concert hall. It comes with all of the key peripherals from the Stereo version, but with additional functionality in each of the peripherals allowing up to 32 speakers to be programmed. It also includes a bonus tool, Speaker Editor, which lets you replicate your speaker setup in Live.</p>\r\n<p>SPAT Stereo is intended for those who have simple stereo configurations (speakers, headphones) and who still wish to integrate high-level spatialization techniques into their productions - whether it is to create sounds headphones or using the transaural panning algorithm with studio monitors. 
The bundle includes stereo versions of all the key devices that make SPAT powerful: Spatial, Room and Multiverb.</p>\r\n<p>SPAT Devices is developed by <a href=\"http://www.musicunit.fr/music-unit-en/manuel-poletti\">Manuel Poletti</a> from the <a href=\"http://www.musicunit.fr/musicunit-en\">Music Unit</a> studio, using the <a href=\"https://forum.ircam.fr/projects/detail/spat/\">SPAT Max library</a> developed by the <a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac\">Acoustic and Cognitive Spaces</a> - STMS team (Ircam, CNRS, Sorbonne University, French Ministry of Culture) and distributed by <a href=\"https://ircamamplify.com/en/\">Ircam Amplify</a>.</p>",
        "topics": [],
        "user": {
            "pk": 1021,
            "forum_user": {
                "id": 1021,
                "user": 1021,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/PortraitMU_Martin_Antiphon.jpg",
                "avatar_url": "/media/cache/32/34/3234bcf828a4be0f8a1b4026963834e4.jpg",
                "biography": "Sound engineer, 3D audio designer, producer and composer, Martin Antiphon is leaving his position as sound manager at IRCAM in 2010 to join the Music Unit team. He already has numerous studio collaborations to his credit with Ibrahim Maalouf, Balake Sissoko, Rone or Vanessa Wagner, as well as concerts throughout Europe as a live electronic performer for Kaija Saariaho, Sivan Eldar and Sebastian Rivas. On the strength of his mastery of traditional mixing techniques and spatial audio technologies, Martin is now working on converging his skills in the field of immersive audio. He is currently CTO of Music Unit, within wich he has developed a patented 3D audio synthesiser. However Martin continues to create and recently inaugurated his first sound installation, Lo Parlament, in his home town of Pau.\nSince 2022, Martin is vice-president of the French section of the AES.",
                "date_modified": "2026-02-25T17:51:20.352692+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": true,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 486,
                        "forum_user": 1021,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-25",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "MartinAntiphon",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "spat-devices-by-music-unit-1",
        "pk": 2068,
        "published": true,
        "publish_date": "2023-02-15T17:10:49+01:00"
    },
    {
        "title": "Ada's Song: Making machine-learning processes visible and tangible - Patricia ALESSANDRINI",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>Ada&rsquo;s Song is a ca. 10-minute work for mezzo-soprano, ensemble and an interactive Piano Machine system, commissioned as an hommage to Ada Lovelace in 2019. It was created using AI-assisted composition processes, and employs real-time machine learning in the performance of the Piano Machine. Designed at Goldsmiths College in collaboration with Konstantin Leonenko in 2017, the Piano Machine plays the strings of the piano directly through mechanical, sustained vibration created by a set of motors and finger-like appendages controlled by microprocessors, thus creating dynamic control of notes over time, piloted by wireless OSC messaging.&nbsp;The material performed by the Piano Machine was generated by a concatenation of recordings of a work by Henry Purcell, Hosanna to the Highest, such that the repetitive ground bass of the original creates a foundation for the expressive intervention of real-time machine-learning processes.<br />In an attempt to render the Piano Machine more expressive and responsive to the &lsquo;human&rsquo; musicians&rsquo; performance, the repeating harmonic patterns performed by the Piano Machine are shaped by machine learning processes that &lsquo;listen&rsquo; to the instrumentalists during the rehearsals and performance. These processes filter the reservoir of notes and amplitudes produced from the concatenated recordings, not only in relation the notes that been played, but how they have been performed. This is achieved by building up training sets of timbral data over the course of rehearsals. Thus, the Piano Machine inscribes itself into the expressive sonic world of the ensemble.&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 4748,
            "forum_user": {
                "id": 4745,
                "user": 4748,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7cc93f830b5f7a4865e56ed58873dae1?s=120&d=retro",
                "biography": "Patricia Alessandrini is a composer/sound artist creating compositions, installations, and performance situations which are most often interactive and theatrical. Through these intermedial formats, she actively engages with the concert music repertoire, and issues of representation, interpretation, perception, and memory. \n\nHer works have been presented in the Americas, Asia, Australia, and over 15 European countries. She is also a performer and improvisor of live electronics. \n\nShe holds two PhDs, from Princeton University and the Sonic Arts Research Centre (SARC). She has taught Computer-Assisted Composition at the Accademia Musicale Pescarese, Composition with Technology at Bangor University, Sonic Arts at Goldsmiths College, and is Assistant Professor of Composition at Stanford University since 2018, where she performs research on embodied interaction, immersive audiovisual experience, and instrument design for inclusive performance at the Center for Computer Research in Music and Acoustics (CCRMA).\n\nHer works are published by Babelscores, and may be consulted at patriciaalessandrini.com and patriciaalessandrini.net",
                "date_modified": "2024-03-20T23:08:08.036921+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "alessandrini",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "adas-song",
        "pk": 2085,
        "published": true,
        "publish_date": "2023-02-24T17:29:32+01:00"
    },
    {
        "title": "La symphonie moderne : exprimer et contenir les émotions - Jungwoo Kim, Jiyoon Kim et Yuri Cho",
        "description": "Ce projet est le fruit d'une collaboration avec Jungwoo Kim, Jiyoon Kim et Yuri Cho.",
        "content": "<p><span><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<p><span></span></p>\r\n<p><span>Pr&eacute;sent&eacute; par :&nbsp;&nbsp;Jungwoo Kim, Jiyoon Kim, and Yuri Cho<br /><a href=\"https://forum.ircam.fr/profile/jw079101/\" title=\"Biographie Jungwoo Kim\">Biography&nbsp;<span>Jungwoo Kim<br /></span></a><a href=\"https://forum.ircam.fr/profile/jyj/\" title=\"Biography Yuri Cho\"><span>B</span>iography&nbsp;<span>Yuri Cho<br /></span></a><a href=\"https://forum.ircam.fr/profile/dawnowl99/\" title=\"Biography Jiyoon Kim\"><span>B</span>iography Jiyoon Kim</a></span></p>\r\n<p><span></span></p>\r\n<p>Nous, les humains, recherchons instinctivement la stimulation.</p>\r\n<p></p>\r\n<p>Pourtant, les sons de la routine quotidienne ne peuvent pas faire battre notre c&oelig;ur.</p>\r\n<p>Nous avons besoin de sons forts, qui se manifestent lorsque nous exprimons correctement nos &eacute;motions, par exemple en donnant un coup de poing, en lan&ccedil;ant ou en s'&eacute;lan&ccedil;ant.</p>\r\n<p>Cependant, si tous les membres de la soci&eacute;t&eacute; recherchent la stimulation, cela entra&icirc;ne des cons&eacute;quences terribles telles que la violence ou la criminalit&eacute;. En tant que citoyens modernes, nous devons parfois r&eacute;primer nos &eacute;motions et agir avec douceur pour participer &agrave; des activit&eacute;s sociales. 
Frapper devient taper, lancer devient dessiner, courir devient marcher.</p>\r\n<p>La vie moderne est un &eacute;quilibre entre l'expression et la suppression des &eacute;motions.</p>\r\n<p>Le bruit d'une tasse qui se brise, une frappe puissante, le bruit d'un papier qui se d&eacute;chire.</p>\r\n<p>Les actions qui expriment des &eacute;motions cr&eacute;ent un rythme fort qui secoue le corps.</p>\r\n<p>D'un autre c&ocirc;t&eacute;, le son d'une tasse qui bouge, des doigts qui tapotent, le doux bruit des pas.</p>\r\n<p>Les actions qui suppriment les &eacute;motions remplissent les intervalles entre les battements de c&oelig;ur.</p>\r\n<p>Nous voulons discuter de la mani&egrave;re dont la stimulation des citoyens modernes peut &ecirc;tre int&eacute;gr&eacute;e dans leur vie.</p>\r\n<p>En utilisant le son des choses et des lumi&egrave;res interactives, nous vous inviterons &agrave; la symphonie cr&eacute;&eacute;e par l'expression et la retenue des &eacute;motions.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 55038,
            "forum_user": {
                "id": 54976,
                "user": 55038,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/874d166e0d67bc325dbea7229e54e4ba?s=120&d=retro",
                "biography": "Yuri Cho is a design practitioner and art director based in London and Seoul. With understanding about visual communication design, she studies Digital Direction in Royal College of Art to navigate future storytelling using novel technologies such as sonic and mixed reality. Her critical thinking skills enables each work to be in more creative way and developed from various design practice and campaigns for commercial purpose. Currently, she focuses on sound audio as a critical medium for her works to express the human's emotional expressions.",
                "date_modified": "2024-04-02T19:37:12.668892+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jyj",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2818,
                    "user": 55038,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "the-modern-symphony-expressing-and-restraining-emotions-1",
        "pk": 2818,
        "published": true,
        "publish_date": "2024-03-07T18:41:33+01:00"
    },
    {
        "title": "Ace the AWS Certified Generative AI Developer – Professional (AIP-C01) Exam",
        "description": "ExamOut delivers industry-aligned resources tailored specifically for the AWS Certified Generative AI Developer – Professional (AIP-C01) exam. ",
        "content": "<h2><strong>Ace the AWS Certified Generative AI Developer &ndash; Professional (AIP-C01) Exam with ExamOut</strong></h2>\n<p>Preparing for the Amazon Web Services AWS Certified Professional AIP-C01 Exam requires precision, real-world expertise, and access to reliable study resources. ExamOut is your trusted partner for achieving certification success on the first attempt. Our expertly curated preparation materials are designed to help professionals master Generative AI concepts on AWS and confidently pass the AIP-C01 Certification Exam.</p>\n<h2><strong>Why Choose ExamOut for AIP-C01 Exam Preparation?</strong></h2>\n<p>ExamOut delivers industry-aligned resources tailored specifically for the AWS Certified Generative AI Developer &ndash; Professional (AIP-C01) exam. Our content is developed by certified AWS experts who understand the exam blueprint and the evolving demands of AI-driven cloud solutions.</p>\n<p><strong>With ExamOut, you gain access to:</strong></p>\n<ul>\n<li>\n<p>AIP-C01 Exam Dumps crafted to reflect real exam scenarios</p>\n</li>\n<li>\n<p>Updated AIP-C01 Exam Questions and Answers</p>\n</li>\n<li>\n<p>Comprehensive AIP-C01 Exam Study Guide PDF Questions</p>\n</li>\n<li>\n<p>High-quality AIP-C01 Practice Test Exam Dumps</p>\n</li>\n</ul>\n<p>These resources are designed to strengthen your understanding of generative AI services, model deployment, security, governance, and optimization on AWS.</p>\n<p><strong>Achieving a HIGH Score &ndash; A Guide to Improve Your Skill in Your Exam:-&nbsp;<a href=\"https://www.examout.co/AIP-C01-exam.html\">https://www.examout.co/AIP-C01-exam.html</a></strong></p>\n<h2>Comprehensive AIP-C01 Dumps for Confident Exam Success</h2>\n<p>Our AIP-C01 Dumps PDF provide an efficient and structured approach to exam preparation. 
Each question is carefully reviewed for accuracy and relevance, ensuring alignment with the latest Amazon Web Services AWS Certified Professional AIP-C01 Exam objectives.</p>\n<p>The Amazon Web Services AWS Certified Professional AIP-C01 Certification Braindumps available on ExamOut are ideal for professionals who want focused preparation without wasting time on outdated or irrelevant content. You&rsquo;ll gain clarity, confidence, and exam-ready knowledge.</p>\n<h2><strong>Real Exam-Focused AIP-C01 Exam Questions</strong></h2>\n<p>ExamOut&rsquo;s AIP-C01 Exam Questions are scenario-based and designed to test practical skills required in real AWS environments. With detailed explanations included in our AIP-C01 Exam Questions and Answers, candidates can easily understand complex topics and close knowledge gaps.</p>\n<p>Whether you&rsquo;re revising key concepts or validating your readiness, our AIP-C01 Exam Dumps help you stay ahead of the competition.</p>\n<h2><strong>Exclusive Winter Sale &ndash; Save 65% on All AWS Certification Exams</strong></h2>\n<p>To make your certification journey even more accessible, ExamOut is offering an Exclusive Winter Sale 65% Discount Offer (Feb 2026) on All Amazon Web Services Certification Exams.</p>\n<p>🎯 <strong>Use Coupon Code: &ldquo;exc65&rdquo;</strong><br>✔ Valid for a limited time<br>✔ Applicable to all AWS certification exam preparation materials</p>\n<p>This is the perfect opportunity to secure premium AIP-C01 Exam Dumps, practice tests, and study guides at an unbeatable price.</p>\n<h2><strong>Start Your AIP-C01 Certification Journey with ExamOut Today</strong></h2>\n<p>The AWS Certified Generative AI Developer &ndash; Professional (AIP-C01) certification validates your advanced skills and boosts your professional credibility in the AI and cloud computing domain. 
With ExamOut, you don&rsquo;t just prepare&mdash;you prepare smart.</p>\n<p><strong>Choose ExamOut today and move one step closer to AWS certification success with confidence.&nbsp;<a href=\"https://www.examout.co\">https://www.examout.co</a></strong></p>",
        "topics": [],
        "user": {
            "pk": 159884,
            "forum_user": {
                "id": 159654,
                "user": 159884,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/51314ab3f022c5fea67c5d968cc2e504?s=120&d=retro",
                "biography": "Getting past Amazon Web Services ExamOut AIP-C01 Exam Questions Dumps (February 2026) make the AIP-C01 Certification Exam quicker and more dependable. ExamOut, created by AWS-certified experts, offers up-to-date AIP-C01 Exam Dumps, authentic AIP-C01 Exam Questions and Answers, and precise practice exams that correspond with the most recent exam objectives. These dumps give candidates a clear and confident understanding of AWS generative AI development. ExamOut offers dependable preparation that increases accuracy, confidence, and pass rates, regardless of your time constraints or goal of success on your first try. ExamOut can help you prepare more effectively, save time, and pass the AWS Certified Generative AI Developer – Professional (AIP-C01) test.",
                "date_modified": "2026-02-03T06:19:42.066170+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "awsexamdumps",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ace-the-aws-certified-generative-ai-developer-professional-aip-c01-exam",
        "pk": 4300,
        "published": false,
        "publish_date": "2026-02-03T06:23:51.247663+01:00"
    },
    {
        "title": "Moving With Time by Seyed Ali Hosseini & Giuseppe Messineo.",
        "description": "Moving with Time is an immersive audiovisual performance exploring the evolving relationship between sound, image, and culture through real-time interaction.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/989c3b15fc7697d497bd1f29e9aee2dd.png\" />&nbsp;</p>\r\n<p>The work examines how musical and technological transformations reflect broader shifts in human perception and identity, tracing a sonic and visual journey across four interconnected eras of expression:</p>\r\n<p><strong>1. Traditional Iranian Music</strong> &ndash; Rooted in acoustic performance using the <em>Setar</em>, representing cultural origin and organic sound.</p>\r\n<p><strong>2. Analog Synthesis</strong> &ndash; Featuring instruments such as the <strong>ARP Odyssey</strong> and <strong>Monotribe</strong>, embodying the warmth and imperfection of early electronic sound.</p>\r\n<p><strong>3. Digital Signal Processing</strong> &ndash; Extending and transforming live sound through <strong>granular</strong> and <strong>concatenative synthesis </strong>in <strong>Max/MSP</strong>.</p>\r\n<p><strong>4. Real-Time Visual Generation</strong> &ndash; Using <strong>TouchDesigner</strong> to translate live audio into dynamic, generative imagery, merging sound and vision into an unique and immersive experience.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a819be80bced88bec68a1da055055608.png\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/88fb1b129f399b53906b5ddc1ef94be4.png\" /></p>\r\n<p><em>Moving with Time</em> seeks to question how tradition adapts within digital environments and how contemporary tools can both distort and amplify cultural memory. 
By placing the <em>Setar</em>&mdash;a deeply symbolic Iranian instrument&mdash;within an electronic and visual framework, the work reimagines heritage as a living, evolving entity.</p>\r\n<p>S.Ali Hosseini, Giuseppe Messineo.</p>",
        "topics": [
            {
                "id": 4230,
                "name": "Analog Synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4229,
                "name": "Audiovisual Performance",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4231,
                "name": "Immersive Art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4232,
                "name": "Iranian Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 80198,
            "forum_user": {
                "id": 80111,
                "user": 80198,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/b2446a55f2854515b8aba89d7952b56c?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-24T15:55:00.717854+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "alihosseini2019",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "moving-with-time",
        "pk": 4357,
        "published": true,
        "publish_date": "2026-02-15T09:57:19+01:00"
    },
    {
        "title": "Somax for Live Tutorials",
        "description": "This page gathers video tutorials on Somax for Live",
        "content": "<h1><strong><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f827bc96a599627385779240af517e07.png\"></strong></h1>\n<h1><strong>Tutorials</strong></h1>\n<h2><strong>Getting Started</strong></h2>\n<p>&lt;iframe title=\"YouTube video player\" src=\"https://www.youtube.com/embed/KbZ-adj8bSA?si=QhQgtoUCkJI3qSmD\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"&gt;&lt;/iframe&gt;</p>\n<h3>Credits</h3>\n<p>Somax2 (c) Ircam 2012-2026</p>\n<p>Somax2 is a totally renewed version of the Somax reactive co-improvisation paradigm born in the Music Representations Team at Ircam - STMS. It is part of the research projects ANR MERCI (Mixed Musical Reality with Creative Instruments) and <a href=\"https://reach.ircam.fr/\">ERC REACH</a> (Raising Co-creativity in Cyber-Human Musicianship) directed by G&eacute;rard Assayag.</p>\n<ul>\n<li><strong>Somax for Live development:</strong> Manuel Poletti in collaboration with Marco Fiorini</li>\n<li><strong>Somax 2 development &amp; documentation:</strong> Joakim Borg and Marco Fiorini</li>\n<li><strong>Somax creation:</strong> G&eacute;rard Assayag and Laurent Bonnasse-Gahot</li>\n<li><strong>Pre-version 2 &amp; adaptations:</strong> Axel Chemla Romeu Santos</li>\n<li><strong>Early Prototype:</strong> Olivier Delerue</li>\n</ul>\n<p>Thanks to Georges Bloch and Mikha&iuml;l Malt for their continuous expertise. Thanks to Bernard Borron, Bernard Magnien, Carine Bonnefoy, Jo&euml;lle L&eacute;andre, Fabrizio Cassol, Marco Fiorini, and Ana&iuml;s del Sordo for their musical material used in the distribution corpus.</p>\n<p>&nbsp;</p>\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 207,
                "name": "Ableton",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2788,
                "name": "Improvisation, generativity and co-creative interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1287,
                "name": "REACH",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2184,
                "name": "RepMus",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4245,
                "name": "somax for live",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4244,
                "name": "somaxforlive",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32267,
            "forum_user": {
                "id": 32219,
                "user": 32267,
                "first_name": "Marco",
                "last_name": "Fiorini",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-01-16_at_10.39.51.jpeg",
                "avatar_url": "/media/cache/e7/ed/e7ed5b0d44a066e65e188a351b8c9bb8.jpg",
                "biography": "Marco Fiorini is an Italian musician and researcher specializing in human-machine interaction in musical improvisation. \nHe is part of the Music Representation team at IRCAM in Paris, working on the ERC REACH project with a focus on Somax2. \nAs a PhD candidate at Sorbonne Université he develops co-creative instruments that foster real-time interaction between musicians and artificial agents.\nHe has collaborated with artists such as Jöelle Léandre, George Lewis, Steve Lehman, and Horse Lords. His work as guitarist, electronic musican and computer music designer have been featured at major international venues and festivals including Carnegie Hall (New York), ManiFeste (Centre Georges Pompidou, Paris), Improtech Paris-Tokyo (Tokyo University of the Arts), Klang (Royal Danish Academy of Music, Copenhagen), Mixtur (ESMUC, Barcelona).\nIn 2024, he is an invited lecturer at the Max Summer School at Tokyo Geidai University of the Arts, and in 2025 he will lead a Somax2 workshop at Berklee College of Music for the 50th anniversary of the International Computer Music Conference in Boston.\nHe holds degrees in Jazz Guitar, Electronic Music, Sound and Music Computing and Software Engineering.",
                "date_modified": "2026-02-25T18:50:33.457396+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 407,
                        "forum_user": 32219,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "fiorini",
            "first_name": "Marco",
            "last_name": "Fiorini",
            "bookmarks": []
        },
        "slug": "somax-for-live-tutorials",
        "pk": 4570,
        "published": false,
        "publish_date": "2026-04-01T11:04:33.897586+02:00"
    },
    {
        "title": "Audio Orchestrator for Installation and Performance",
        "description": "Using BBC's Audio Orchestrator it is possible to easily create multichannel environments over the internet, in which each channel is transmitted to the user's cellphone, tablet or desktop computer. The article examines various strategies for multichannel sound in which the composer does not know how many people are logged in at the same time, where they currently are in the room or if they are walking around, or in which direction they will point their cellphones.",
        "content": "<p><a href=\"https://tammen.org/Audio-Orchestrator-for-Installation-and-Performance\">https://tammen.org/Audio-Orchestrator-for-Installation-and-Performance</a></p>",
        "topics": [
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 16747,
            "forum_user": {
                "id": 16744,
                "user": 16747,
                "first_name": "Hans",
                "last_name": "Tammen",
                "avatar": "https://forum.ircam.fr/media/avatars/Hans_Tammen_joergsteinmetz-medium.jpg",
                "avatar_url": "/media/cache/3b/97/3b976219def8982b587bdd88a7e557a3.jpg",
                "biography": "Hans Tammen likes to set sounds in motion, and then sit back to watch the movements unfold. Using textures, timbre and dynamics as primary elements, his music is continuously shifting, with different layers floating into the foreground while others disappear. His music flows like clockwork, “transforming a sequence of instrumental gestures into a wide territory of semi-hostile discontinuity; percussive, droning, intricately colorful, or simply blowing your socks off” (Touching Extremes).\n\nHis works have been presented at festivals in the US, Canada, Mexico, Russia, Ukraine, India, South Africa, the Middle East and all over Europe. Hans Tammen received grants and composer commissions from NewMusicUSA,  Chamber Music America, MAPFund, Mid-Atlantic Arts Foundation, American Music Center, Lucas Artists Residencies Montalvo, New York State Council On The Arts (NYSCA), New York Foundation For The Arts (NYFA), American Composers Forum w/ Jerome Foundation, Foundation for Contemporary Arts Emergency Funds, New York State Music Fund, Goethe Institute w/ Foreign Affairs Office, among others.",
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "hanstammen",
            "first_name": "Hans",
            "last_name": "Tammen",
            "bookmarks": []
        },
        "slug": "audio-orchestrator-for-installation-and-performance",
        "pk": 1229,
        "published": false,
        "publish_date": "2022-08-06T19:46:35.272042+02:00"
    },
    {
        "title": "New from the EAC Research Team by Thibaut Carpentier",
        "description": "We will introduce the latest software developments from the EAC Research Team (Acoustics & Cognition)",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/db91ab337fc406aba44a77710ee5a71b.png\" width=\"951\" height=\"654\" /></p>\r\n<p></p>\r\n<p>Presented by Thibaut Carpentier</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/tcarpent/\" target=\"_blank\">Biography</a></p>\r\n<p>In this short presentation, we will introduce the latest software developments from the EAC Research Team (Acoustics &amp; Cognition).<br />In particular, we will introduce the latest releases of the <a href=\"https://forum.ircam.fr/projects/detail/spat/\" target=\"_blank\">Spat5 package for Max</a>, and the <a href=\"https://forum.ircam.fr/projects/detail/panoramix/\" target=\"_blank\">Panoramix standalone workstation</a>.<br />These releases include a number of new features, bug fixes, and other improvements.<br />These improvements concern all aspects of the toolbox: GUI objects, DSP components, command line tools, documentation and tutorials.</p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 81,
                "name": "Panoramix",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 403,
                "name": "Reverberation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 92,
            "forum_user": {
                "id": 92,
                "user": 92,
                "first_name": "Thibaut",
                "last_name": "Carpentier",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/5200b4214a3aff548eef81f9d804ae8b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-20T10:51:45.860663+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 446,
                        "forum_user": 92,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-21",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "tcarpent",
            "first_name": "Thibaut",
            "last_name": "Carpentier",
            "bookmarks": []
        },
        "slug": "new-from-the-eac-research-team-by-thibaut-carpentier",
        "pk": 3328,
        "published": true,
        "publish_date": "2025-03-06T10:29:39+01:00"
    },
    {
        "title": "News ASAP and Partiels software by Pierre Guillot",
        "description": "At this conference, Pierre Guillot will be presenting the latest additions to the Partiels software suite and the ASAP.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">Presented by Pierre Guillot</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/guillot/\" target=\"_blank\">Biography</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">You will discover some of the key new features of Partiels, such as OSC support, navigation and editing modes, extra results, analysis plug-ins based on neural networks, etc</div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><a href=\"https://github.com/Ircam-Partiels/Partiels\" target=\"_blank\">Partiels Project</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><span>The ASAP plug-in collection arrives with major new versions of the Psycho Filter and Stretch Life plug-ins. These offer new user interfaces and experiences, as well as new sound transformation modes. You can discover these features on your computer or iPad! 
</span></div>\r\n<div class=\"c-content__button\"><span></span></div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/projects/detail/asap/\" target=\"_blank\">ASAP Project</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/capture_d&rsquo;écran&nbsp;._2025-03-06_à_14.29.03.jpeg\" alt=\"\" width=\"1171\" height=\"878\" /></div>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -100px; top: -20.0052px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "news-asap-and-partiels-software-by-pierre-guillot",
        "pk": 3332,
        "published": true,
        "publish_date": "2025-03-06T16:38:05+01:00"
    },
    {
        "title": "The latest advances in audio generation by the ACIDS group of the Analysis Synthesis team by Philippe Esling, Nils Demerlé and Axel Chemla Romeu-Santos",
        "description": "This technical presentation showcases the latest advances in audio generation by the ACIDS group of the Analysis Synthesis team.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/image_(4).png\" alt=\"\" width=\"450\" height=\"450\" />&nbsp;&nbsp;<img src=\"https://forum.ircam.fr/media/uploads/image_(3).png\" alt=\"\" width=\"866\" height=\"448\" /><img src=\"/media/uploads/image.png\" alt=\"\" width=\"1832\" height=\"1116\" /></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">Presented by Nils Demerl&eacute;, Philippe Esling, Axel Chemla Romeu-Santos</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/esling/\" target=\"_blank\">Biography Philippe Esling</a></div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/demerle/\" target=\"_blank\">Biography Nils Demerl&eacute;</a></div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/chemla/\" target=\"_blank\">Biography Axel Chemla Romeu-Santos</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -100px; top: -20.0052px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>\r\n<div>\r\n<div>\r\n<div>\r\n<p>They will introduce the latest developments in NN-tilde and RAVE, the key points of the latest update, and a series of Max4Live devices. 
This presentation will also introduce AFTER, a new generation method based on real-time diffusion models with high-level controls, including timbre transfer and audio generation from MIDI and explicit control signals.</p>\r\n<p>The presentation will then discuss TorchBend, a new experimental library enabling real-time network bending and its integration with MaxMSP.</p>\r\n<p>The session concludes with future perspectives, particularly on model compression for embedded synthesis and new control modalities.</p>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [],
        "user": {
            "pk": 18124,
            "forum_user": {
                "id": 18118,
                "user": 18124,
                "first_name": "Philippe",
                "last_name": "Esling",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/be2d538326bb9054b1d3e9a1c856c61f?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-24T02:53:22.755457+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 229,
                        "forum_user": 18118,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-09",
                        "type": 0,
                        "keys": [
                            {
                                "id": 297,
                                "membership": 229
                            },
                            {
                                "id": 300,
                                "membership": 229
                            },
                            {
                                "id": 362,
                                "membership": 229
                            },
                            {
                                "id": 525,
                                "membership": 229
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "esling",
            "first_name": "Philippe",
            "last_name": "Esling",
            "bookmarks": []
        },
        "slug": "the-latest-advances-in-audio-generation-by-the-acids-group-of-the-analysis-synthesis-team-by-nils-demerle-philippe-esling-and-axel-chemla-romeu-santos",
        "pk": 3352,
        "published": true,
        "publish_date": "2025-03-18T11:02:47+01:00"
    },
    {
        "title": "Ma tribune",
        "description": "Description de ma tribune.",
        "content": "<p><strong>Le contenu de mon article</strong></p>",
        "topics": [],
        "user": {
            "pk": 4,
            "forum_user": {
                "id": 4,
                "user": 4,
                "first_name": "Raphael",
                "last_name": "Voyazopoulos",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d07e151d99d17f02f5c915341aa0f4da?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "voyazopoulos",
            "first_name": "Raphael",
            "last_name": "Voyazopoulos",
            "bookmarks": []
        },
        "slug": "ma-tribune",
        "pk": 213,
        "published": false,
        "publish_date": "2019-04-09T16:51:45+02:00"
    },
    {
        "title": "MyBeeKnows for Ableton Live par Music Unit - Martin Antiphon",
        "description": "Technologie de synthèse binaurale",
        "content": "<p><strong><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br /></strong>Pr&eacute;sent&eacute; par : Martin Antiphon&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/MartinAntiphon/\">Biographie&nbsp;</a></p>\r\n<p><strong></strong></p>\r\n<p><strong>MyBeeKnows pour Ableton Live par Music Unit</strong></p>\r\n<p>MyBeeKnows est une technologie de synth&egrave;se binaurale qui utilise un ensemble universel de HRTF (Head-Related Transfer Functions) pour effectuer une spatialisation binaurale 3D transparente. MyBeeKnows peut traiter des signaux mono, st&eacute;r&eacute;o et multicanaux ainsi que des signaux ambisoniques 3D d'ordre &eacute;lev&eacute;.</p>\r\n<p>La principale qualit&eacute; de MyBeeKnows, outre son efficacit&eacute; en terme de temps de calcul, est la grande transparence du rendu audio, qui peut ainsi s'&eacute;couter aussi bien sur casque que sur enceintes.</p>\r\n<p>MyBeeKnows est d&eacute;velopp&eacute; par Music Unit, studio fran&ccedil;ais d'enregistrement, de cr&eacute;ation et de recherche ax&eacute; sur la qualit&eacute; de la restitution sonore. Il est co-d&eacute;velopp&eacute; avec l'Ecole Polytechnique de Paris et le Conservatoire National Sup&eacute;rieur de Musique et Danse de Paris.</p>\r\n<p>MyBeeKnows est d&eacute;sormais disponible pour Live, en tant que suite de p&eacute;riph&eacute;riques audio impl&eacute;ment&eacute;s dans Max For Live.</p>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/fd1b94041c5ed5c8e66d3cbc7071e333.png\" /></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>\r\n<p></p>",
        "topics": [
            {
                "id": 207,
                "name": "Ableton",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 290,
                "name": "M4l",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1133,
                "name": "Max for live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1894,
                "name": "MusicUnit",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1021,
            "forum_user": {
                "id": 1021,
                "user": 1021,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/PortraitMU_Martin_Antiphon.jpg",
                "avatar_url": "/media/cache/32/34/3234bcf828a4be0f8a1b4026963834e4.jpg",
                "biography": "Sound engineer, 3D audio designer, producer and composer, Martin Antiphon is leaving his position as sound manager at IRCAM in 2010 to join the Music Unit team. He already has numerous studio collaborations to his credit with Ibrahim Maalouf, Balake Sissoko, Rone or Vanessa Wagner, as well as concerts throughout Europe as a live electronic performer for Kaija Saariaho, Sivan Eldar and Sebastian Rivas. On the strength of his mastery of traditional mixing techniques and spatial audio technologies, Martin is now working on converging his skills in the field of immersive audio. He is currently CTO of Music Unit, within wich he has developed a patented 3D audio synthesiser. However Martin continues to create and recently inaugurated his first sound installation, Lo Parlament, in his home town of Pau.\nSince 2022, Martin is vice-president of the French section of the AES.",
                "date_modified": "2026-02-25T17:51:20.352692+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": true,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 486,
                        "forum_user": 1021,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-25",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "MartinAntiphon",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "mybeeknows-for-ableton-live-by-music-unit",
        "pk": 2831,
        "published": true,
        "publish_date": "2024-03-13T17:17:50+01:00"
    },
    {
        "title": "Home Staging Melbourne – Complete Property Styling for Maximum Sale Value",
        "description": "Home Staging Melbourne services designed to elevate your property’s look and attract serious buyers. Improve presentation, reduce time on market, and sell at the best price.",
        "content": "<h3>Home Staging Melbourne &ndash; Elevate Your Property&rsquo;s Market Appeal</h3>\n<p>Selling a home in a competitive market requires more than just listing it online. <a href=\"https://thestylecast.com.au/home-staging-melbourne/\"><strong>Home Staging Melbourne</strong></a> is a powerful solution that transforms your property into a visually appealing and buyer-ready space. By enhancing layout, design, and presentation, staging helps your home stand out and attract serious buyers.</p>\n<p>A well-staged home not only looks better but also creates a strong emotional connection, making it easier for buyers to imagine themselves living there.</p>\n<hr>\n<h3>What is Home Staging?</h3>\n<p><strong>Home Staging Melbourne</strong> is the process of preparing a property for sale by improving its appearance through styling, furniture placement, and d&eacute;cor enhancement. The goal is to highlight the best features of the home while making spaces feel functional, spacious, and inviting.</p>\n<p>This includes:</p>\n<ul>\n<li>Decluttering and organizing spaces</li>\n<li>Rearranging or adding furniture</li>\n<li>Using neutral tones and modern d&eacute;cor</li>\n<li>Enhancing lighting and flow</li>\n</ul>\n<p>These techniques work together to create a polished and attractive environment for buyers.</p>\n<hr>\n<h3>Why Home Staging Melbourne is Essential</h3>\n<p>Melbourne&rsquo;s real estate market is highly competitive, and buyers often have multiple options. 
A professionally staged property stands out and leaves a lasting impression.</p>\n<h4>✔ Faster Selling Time</h4>\n<p>Staged homes attract more attention and typically sell quicker than unstaged properties.</p>\n<h4>✔ Higher Buyer Engagement</h4>\n<p>A clean and well-styled home keeps buyers interested during inspections.</p>\n<h4>✔ Increased Property Value</h4>\n<p>Professional presentation creates a premium feel, encouraging better offers.</p>\n<h4>✔ Better First Impression</h4>\n<p>Buyers often decide within seconds&mdash;staging ensures those seconds count.</p>\n<hr>\n<h3>Key Elements of Effective Home Staging</h3>\n<p>To achieve the best results, <strong>Home Staging Melbourne</strong> focuses on several core elements:</p>\n<h4>1. Space Optimization</h4>\n<p>Proper furniture placement improves room flow and makes spaces feel larger.</p>\n<h4>2. Neutral Styling</h4>\n<p>Simple and elegant d&eacute;cor appeals to a wider range of buyers.</p>\n<h4>3. Lighting Enhancement</h4>\n<p>Bright and well-lit spaces create a warm and welcoming atmosphere.</p>\n<h4>4. Modern Design Touches</h4>\n<p>Minimal accessories add style without overwhelming the space.</p>\n<p>These elements combine to create a balanced and attractive presentation.</p>\n<hr>\n<h3>Types of Home Staging Services</h3>\n<p>Different properties require different staging approaches. 
<strong>Home Staging Melbourne</strong> offers flexible solutions:</p>\n<h4>Full Home Staging</h4>\n<p>Ideal for empty homes, including complete furniture and d&eacute;cor setup.</p>\n<h4>Partial Staging</h4>\n<p>Uses existing furniture with added styling elements.</p>\n<h4>Consultation Services</h4>\n<p>Expert advice to help homeowners prepare their property.</p>\n<h4>Furniture Rental</h4>\n<p>Temporary use of stylish furniture to enhance presentation.</p>\n<p>Each option is designed to suit different budgets and property needs.</p>\n<hr>\n<h3>How Home Staging Impacts Buyers</h3>\n<p>Most buyers begin their search online, making presentation more important than ever. With <strong>Home Staging Melbourne</strong>, your property looks more appealing in photos and attracts more clicks.</p>\n<p>Staged homes feel:</p>\n<ul>\n<li>More spacious</li>\n<li>More functional</li>\n<li>More modern</li>\n</ul>\n<p>This improves buyer perception and increases the chances of receiving offers.</p>\n<hr>\n<h3>Cost vs Value of Home Staging</h3>\n<p>While staging requires an investment, it often delivers strong returns. 
According to industry insights, staged homes can sell faster and sometimes achieve higher prices due to improved presentation and buyer appeal.</p>\n<p>The value comes from:</p>\n<ul>\n<li>Reduced time on market</li>\n<li>Increased buyer interest</li>\n<li>Better selling price</li>\n</ul>\n<hr>\n<h3>Simple Tips for Better Home Staging</h3>\n<p>To maximize results from <strong>Home Staging Melbourne</strong>, follow these tips:</p>\n<ul>\n<li>Focus on key areas like living room and bedrooms</li>\n<li>Remove personal items and clutter</li>\n<li>Use neutral colors and minimal d&eacute;cor</li>\n<li>Improve lighting in all spaces</li>\n<li>Keep the property clean and fresh</li>\n</ul>\n<p>These small changes can make a big difference in presentation.</p>\n<hr>\n<h3>Why Choose Professional Home Staging Melbourne?</h3>\n<p>Professional staging experts understand what buyers are looking for and how to present your property effectively. Their experience ensures your home is styled according to current trends and market expectations.</p>\n<p>With <strong>Home Staging Melbourne</strong>, you get:</p>\n<ul>\n<li>Expert design strategies</li>\n<li>Access to modern furniture and d&eacute;cor</li>\n<li>Market-focused styling</li>\n<li>Maximum visual impact</li>\n</ul>\n<hr>\n<h3>Final Thoughts</h3>\n<p>In today&rsquo;s property market, presentation is everything. <strong>Home Staging Melbourne</strong> helps you showcase your property in the best possible way, attract more buyers, and achieve faster, more profitable sales.</p>\n<p>Whether you&rsquo;re selling a small apartment or a large home, staging is a smart investment that delivers real results.</p>",
        "topics": [
            {
                "id": 4519,
                "name": "House Staging Experts",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4518,
                "name": "Property Styling Melbourne",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166232,
            "forum_user": {
                "id": 165996,
                "user": 166232,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/dbf792ce91cb86c77000441bf0380e7c?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-03-31T07:42:06.510626+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "thestylecast",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "home-staging-melbourne-boost-property-value-with-expert-styling",
        "pk": 4557,
        "published": false,
        "publish_date": "2026-03-31T08:13:10.803411+02:00"
    },
    {
        "title": "Panoramix - 3D Audio Tutoriel",
        "description": "Tutoriel 3D Audio (Panoramix) with Markus Noisternig (EAC) @ Spatial Audio Summer Seminar 2018, EMPAC, NY, USA",
        "content": "<p><span class=\"style-scope yt-formatted-string\" dir=\"auto\">Panoramix is a standalone application dedicated to spatial audio mixing and post-production. This tool offers a comprehensive environment for mixing, reverberating, and spatializing sound materials from different microphone systems: surround microphone trees, spot microphones, ambient miking, Higher Order Ambisonics capture. Several 3-D spatialization techniques (VBAP, HOA, binaural) can be combined and mixed simultaneously in different formats. Panoramix also provides conventional features of mixing engines (equalizer, compressor/expander, grouping parameters, routing of input/output signals, etc.), and it can be controlled entirely via the Open Sound Control protocol. The software can also be used to control the diffusion of sound for spatialized live events.&nbsp;</span></p>\r\n<p><a href=\"/projects/detail/panoramix/\"><span class=\"style-scope yt-formatted-string\" dir=\"auto\">https://forum.ircam.fr/projects/detail/panoramix/</span></a></p>\r\n<p><span class=\"style-scope yt-formatted-string\" dir=\"auto\">Hosted by EMPAC at Rensselaer along with IRCAM (the Paris-based Institut de Recherche et Coordination Acoustique/Musique), and CCRMA (Stanford University Center for Computer Research in Music and Acoustics), this workshop gave participants the opportunity to experience large-scale, complex audio setups in pristine acoustic environments. </span></p>\r\n<p><a href=\"https://empac.rpi.edu/events/2018/sass-2018?q=events%2F2018%2Fsass-2018\">https://empac.rpi.edu/events/2018/sass-2018?q=events%2F2018%2Fsass-2018</a></p>\r\n<p><iframe width=\"560\" height=\"314\" src=\"//www.youtube.com/embed/4zRh7XHC378\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p>&nbsp;</p>\r\n<p><iframe width=\"560\" height=\"314\" src=\"//www.youtube.com/embed/sFcNlF1TNVw\" allowfullscreen=\"allowfullscreen\"></iframe></p>",
        "topics": [
            {
                "id": 621,
                "name": "3daudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 43,
                "name": "EAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 81,
                "name": "Panoramix",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "tutoriel-panoramix-3d-audio",
        "pk": 1041,
        "published": true,
        "publish_date": "2022-01-21T16:42:49+01:00"
    },
    {
        "title": "Experiments in Designing Musical Experiences for Learning with Audiences",
        "description": "This article shares ",
        "content": "<p>This is placeholder text for the article I am writing to share here.</p>",
        "topics": [],
        "user": {
            "pk": 39,
            "forum_user": {
                "id": 39,
                "user": 39,
                "first_name": "S. Alex",
                "last_name": "Ruthmann",
                "avatar": "https://forum.ircam.fr/media/avatars/alexruthmann_portrait_square_0_1.png",
                "avatar_url": "/media/cache/7e/bf/7ebf2cb69693475cb8c6bb27b234fc62.jpg",
                "biography": "S. Alex Ruthmann is Area Head and Associate Professor of Interactive Media and Business at NYU Shanghai and Associated Professor of Music Education and Music Technology at NYU Steinhardt. He is the Founder/Director of the NYU Music Experience Design Lab (MusEDLab), and core faculty in the Music and Audio Research Lab (MARL). The MusEDLab creative learning and software projects are in active use by over 6.5 million people across the world.\n\nRuthmann recently launched a new research lab focused on sustainable entrepreneurship practices in classical music training programs in collaboration with the New World Symphony. This work is funded by a recent 5-year award from the National Endowment of the Arts. Ruthmann's research portfolio also includes a Norwegian project DigiSus, a participatory design research project focused on the design and development of interactive arts spaces infused with non-screen-based digital technologies for creative play. \n\nRuthmann currently serves as Co-Editor of the International Journal of Music Education and is co-author of the book Scratch Music Projects, an introduction to creative music coding projects in MIT's Scratch programming language for kids.",
                "date_modified": "2024-10-08T11:26:37.742325+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "alexruthmann",
            "first_name": "S. Alex",
            "last_name": "Ruthmann",
            "bookmarks": []
        },
        "slug": "designing-musical-experiences-with-audiences",
        "pk": 401,
        "published": false,
        "publish_date": "2019-12-30T15:55:43.919013+01:00"
    },
    {
        "title": "Workshop ASAP & Partiels by Pierre Guillot",
        "description": "Through practice and concrete examples, participants will know how to use the ASAP tools for transforming sound: cross-synthesis, pitch transposition, time stretching, spectral filtering and spectral remix. He will present the functionalities offered by the ASAP collection, and in particular the plug-ins based on ARA2 technology.",
        "content": "<p style=\"font-weight: 400;\">The Psycho Filter plug-in lets you draw shape filters on the sound spectrogram and control their gain and fade. The sound representation and user interface enable you to create highly complex and precise surface filters to reduce or enhance specific parts of the sound's spectral components, to compensate for annoying artifacts in the sound, to isolate certain specificities of the sound and to creatively transform the sound. The Pitches Brew plugin lets you transpose the pitch and formant of sounds by drawing and modifying their frequency curves. Beyond the exceptional quality of the processing, the plugin offers a visual representation of the original fundamental frequencies, expected pitches, and formants with curves enabling numerous original edits such as redrawing, transposing, stretching, copying, etc.</p>",
        "topics": [],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "workshop-asap-partiels-by-pierre-guillot",
        "pk": 3073,
        "published": true,
        "publish_date": "2024-10-24T17:07:56+02:00"
    },
    {
        "title": "How to DJ in Spatial Audio by Deep Space",
        "description": "In this session, you'll learn everything you need to know about the art of DJing in Spatial Audio.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img src=\"/media/uploads/spatial_audio_with_ableton_live.png\" alt=\"\" width=\"812\" height=\"525\" /></p>\r\n<p>Presented by : Axel Delafosse</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/deepspace/\" target=\"_blank\">Biography</a></p>\r\n<p>Let's explore two methods for DJing in Spatial Audio:<br />&nbsp; 1. using Pro Tools to spatialize a stereo DJ set in real-time<br />&nbsp; 2. using Ableton Live as a multichannel mixer<br /><br />The first method enables any DJ to spatialize his DJ set, even if he's playing with vinyl records. It's a great way to make it more accessible while giving some control to the DJs, enabling them to move the vocals across the room and playing with some presets.<br /><br />The second method is more precise but more complex. It requires a lot of preparation and we are going to dig into some of the details: getting the multichannel files ready for DJing, routing with a Max 4 Live device, spatializing with multiple object panner plug-ins, and more...<br /><br />We are going to integrate with multiple spatialization processors, such as the Dolby Atmos Renderer, Flux:: and IRCAM Spat Revolution, L-Acoustics L-ISA, and d&amp;b Soundscape.<br /><br />Finally, you&rsquo;ll discover how a simple mindset shift can allow you to use the same immersive mix for both live performances and distribution on streaming platforms.</p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 670,
                "name": "Deep learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2464,
                "name": "DJing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2463,
                "name": "Dolby Atmos",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2585,
                "name": "Pro Tools",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 61896,
            "forum_user": {
                "id": 61829,
                "user": 61896,
                "first_name": "Deep",
                "last_name": "Space",
                "avatar": "https://forum.ircam.fr/media/avatars/deep-space.png",
                "avatar_url": "/media/cache/95/52/95523b1de1fa11c3f2ac04cb9fd4eaa3.jpg",
                "biography": "Deep Space is a French DJ and immersive live mixing engineer.\n\nOver the past year, he developed a method to use Ableton Live for DJing in Spatial Audio. And he just made a breakthrough enabling him to spatialize any DJ set in real-time.\n\nHe's launching Sweet Spot: a series of immersive events featuring a Dolby Atmos sound system using L-Acoustics L-ISA. He wants to build a community of like-minded DJs and music producers, which is why he’s sharing his expertise at the Ircam Forum.",
                "date_modified": "2025-08-02T18:29:04.185932+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "deepspace",
            "first_name": "Deep",
            "last_name": "Space",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2917,
                    "user": 61896,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3246,
                    "user": 61896,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 61896,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "how-to-dj-in-spatial-audio-by-deep-space",
        "pk": 3246,
        "published": true,
        "publish_date": "2025-02-03T16:41:40+01:00"
    },
    {
        "title": "professional-home-staging – Expert Property Styling for Faster Sales",
        "description": "Discover professional-home-staging services to enhance your property’s appeal. Attract buyers, sell faster, and increase value with expert styling and modern presentation techniques.",
        "content": "<h3>professional-home-staging &ndash; The Smart Way to Prepare Your Property for Sale</h3>\n<p>In today&rsquo;s competitive property market, presentation is everything. <a href=\"https://thestylecast.com.au/professional-home-staging\"><strong>professional-home-staging</strong> </a>is a powerful strategy that transforms your home into a visually appealing and market-ready space. It goes beyond simple decoration&mdash;focusing on creating a lifestyle that buyers can instantly connect with.</p>\n<p>A professionally staged home allows potential buyers to imagine themselves living in the space, which plays a crucial role in influencing their purchasing decisions.</p>\n<hr>\n<h3>What is professional-home-staging?</h3>\n<p><strong>professional-home-staging</strong> is the process of preparing a property for sale by enhancing its appearance through furniture arrangement, d&eacute;cor styling, and space optimization. The aim is to highlight the best features of the home while minimizing any drawbacks.</p>\n<p>This process includes:</p>\n<ul>\n<li>Strategic furniture placement</li>\n<li>Decluttering and organization</li>\n<li>Use of neutral colors and modern d&eacute;cor</li>\n<li>Enhancing lighting and ambiance</li>\n</ul>\n<p>These elements work together to create a clean, inviting, and attractive environment for buyers.</p>\n<hr>\n<h3>Why professional-home-staging is Important</h3>\n<p>In a market where buyers have multiple options, standing out is essential. 
<strong>professional-home-staging</strong> helps your property make a strong first impression both online and during inspections.</p>\n<h4>✔ Increased Buyer Interest</h4>\n<p>Staged homes attract more attention and generate higher engagement.</p>\n<h4>✔ Faster Sales</h4>\n<p>Well-presented homes tend to sell quicker compared to unstaged properties.</p>\n<h4>✔ Higher Selling Price</h4>\n<p>A beautifully styled home creates a premium feel, encouraging better offers.</p>\n<h4>✔ Better Online Listings</h4>\n<p>Professional staging improves property photos, making listings more appealing to potential buyers.</p>\n<hr>\n<h3>Key Features of professional-home-staging</h3>\n<p>To achieve the best results, <strong>professional-home-staging</strong> focuses on several important aspects:</p>\n<h4>1. Space Optimization</h4>\n<p>Proper furniture arrangement makes rooms appear larger and more functional.</p>\n<h4>2. Neutral Styling</h4>\n<p>Using simple and neutral tones appeals to a wider audience.</p>\n<h4>3. Lighting Enhancement</h4>\n<p>Good lighting creates a warm and welcoming atmosphere.</p>\n<h4>4. Modern D&eacute;cor</h4>\n<p>Stylish accessories add personality without overwhelming the space.</p>\n<hr>\n<h3>Types of professional-home-staging Services</h3>\n<p>Different properties require different staging approaches. 
<strong>professional-home-staging</strong> services are flexible and can be tailored to your needs:</p>\n<ul>\n<li><strong>Full Staging</strong> &ndash; Complete setup for vacant homes with furniture and d&eacute;cor</li>\n<li><strong>Partial Staging</strong> &ndash; Enhancing existing furniture with styling elements</li>\n<li><strong>Consultation Services</strong> &ndash; Expert advice for preparing your home</li>\n<li><strong>Furniture Rental</strong> &ndash; Temporary use of modern furniture for staging</li>\n</ul>\n<p>These options allow homeowners to choose a solution that fits their budget and property type.</p>\n<hr>\n<h3>How professional-home-staging Impacts Buyer Psychology</h3>\n<p>One of the biggest advantages of <strong>professional-home-staging</strong> is its impact on buyer psychology. A well-staged home creates an emotional connection, making buyers feel comfortable and inspired.</p>\n<p>When buyers walk into a staged property, they:</p>\n<ul>\n<li>Visualize their future lifestyle</li>\n<li>Feel more confident about the purchase</li>\n<li>Are more likely to make an offer</li>\n</ul>\n<p>This emotional engagement is a key factor in successful property sales.</p>\n<hr>\n<h3>Cost vs Value of professional-home-staging</h3>\n<p>While <strong>professional-home-staging</strong> requires an initial investment, it often delivers a strong return. 
The cost depends on factors like property size, furniture quality, and duration of staging.</p>\n<p>However, the benefits usually outweigh the cost:</p>\n<ul>\n<li>Faster transactions</li>\n<li>Reduced time on market</li>\n<li>Increased property value</li>\n</ul>\n<p>Investing in staging is a smart decision for sellers who want maximum results.</p>\n<hr>\n<h3>Tips for Effective professional-home-staging</h3>\n<p>To get the best outcome, follow these simple tips:</p>\n<ul>\n<li>Focus on key areas like the living room and master bedroom</li>\n<li>Keep spaces clean and clutter-free</li>\n<li>Use minimal and modern d&eacute;cor</li>\n<li>Ensure proper lighting in all rooms</li>\n<li>Highlight the property&rsquo;s best features</li>\n</ul>\n<p>These strategies can significantly improve your home&rsquo;s presentation.</p>\n<hr>\n<h3>Why Choose professional-home-staging?</h3>\n<p>Choosing <strong>professional-home-staging</strong> ensures that your property is presented in line with current market trends and buyer expectations. Experts bring experience, creativity, and strategic planning to make your home stand out.</p>\n<p>Professional staging offers:</p>\n<ul>\n<li>Expert design and styling</li>\n<li>Access to premium furniture and d&eacute;cor</li>\n<li>Market-focused presentation</li>\n<li>Maximum visual impact</li>\n</ul>\n<hr>\n<h3>Final Thoughts</h3>\n<p>In a competitive real estate market, <strong>professional-home-staging</strong> is no longer optional&mdash;it is essential. It transforms your property into a desirable, move-in-ready space that attracts buyers and maximizes value.</p>\n<p>Whether you are selling a small apartment or a large family home, professional staging helps you achieve faster sales and better results.</p>",
        "topics": [
            {
                "id": 4520,
                "name": "home staging services",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4521,
                "name": "property styling experts",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166232,
            "forum_user": {
                "id": 165996,
                "user": 166232,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/dbf792ce91cb86c77000441bf0380e7c?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-03-31T07:42:06.510626+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "thestylecast",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "professional-home-staging-expert-property-styling-for-faster-sales",
        "pk": 4558,
        "published": false,
        "publish_date": "2026-03-31T08:24:42.170312+02:00"
    },
    {
        "title": "house-staging-melbourne-cost – Complete Pricing Guide for Property Styling",
        "description": "Understand house-staging-melbourne-cost with this complete guide. Learn pricing, factors, and tips to maximize value while preparing your home for a successful sale.",
        "content": "<h3>house-staging-melbourne-cost &ndash; A Complete Guide for Property Sellers</h3>\n<p>When preparing your property for sale, one of the most common questions sellers ask is about <strong>house-staging-melbourne-cost</strong>. Understanding the cost of staging helps you plan your budget effectively while ensuring your home is presented in the best possible way.</p>\n<p>Home staging is not just an expense&mdash;it is a strategic investment that can improve buyer interest, reduce time on the market, and increase the final sale price.</p>\n<hr>\n<h3>What is Included in house-staging-melbourne-cost?</h3>\n<p><strong>house-staging-melbourne-cost</strong> typically includes a combination of services designed to enhance your property&rsquo;s appeal:</p>\n<ul>\n<li>Furniture and d&eacute;cor rental</li>\n<li>Interior styling and design planning</li>\n<li>Delivery, setup, and removal</li>\n<li>Consultation and space planning</li>\n<li>Accessory styling (artwork, rugs, lighting, etc.)</li>\n</ul>\n<p>These elements work together to create a modern, inviting, and buyer-ready environment.</p>\n<hr>\n<h3>Average house-staging-melbourne-cost in Melbourne</h3>\n<p>The cost of staging varies depending on property size and requirements. On average:</p>\n<ul>\n<li>1&ndash;2 bedroom properties: $800 &ndash; $1,500</li>\n<li>3 bedroom homes: $1,500 &ndash; $3,000</li>\n<li>4+ bedroom homes: $3,000 &ndash; $6,000+</li>\n</ul>\n<p>Other industry estimates suggest:</p>\n<ul>\n<li>Small homes: around $1,500 &ndash; $2,500</li>\n<li>Medium homes: $2,500 &ndash; $4,000</li>\n<li>Large homes: $4,000 &ndash; $7,500+</li>\n</ul>\n<p>These ranges depend on the level of staging, furniture quality, and duration of service.</p>\n<hr>\n<h3>Key Factors That Affect house-staging-melbourne-cost</h3>\n<p>Several important factors influence the final <strong>house-staging-melbourne-cost</strong>:</p>\n<h4>1. 
Property Size</h4>\n<p>Larger homes require more furniture, d&eacute;cor, and effort, increasing overall costs. Smaller apartments are usually more affordable to stage.</p>\n<h4>2. Type of Staging</h4>\n<ul>\n<li>Vacant homes require full furniture setup (higher cost)</li>\n<li>Occupied homes may only need partial styling (lower cost)</li>\n</ul>\n<h4>3. Duration of Staging</h4>\n<p>Most staging companies charge based on how long the property is staged. Longer durations increase total costs.</p>\n<h4>4. Furniture Quality</h4>\n<p>Luxury or designer furniture increases costs, while minimalist styling is more budget-friendly.</p>\n<h4>5. Location</h4>\n<p>Costs may vary depending on suburb, demand, and accessibility of the property.</p>\n<hr>\n<h3>Additional Costs to Consider</h3>\n<p>Apart from basic staging, some additional services may affect <strong>house-staging-melbourne-cost</strong>:</p>\n<ul>\n<li>Painting and minor repairs</li>\n<li>Landscaping and exterior styling</li>\n<li>Deep cleaning services</li>\n<li>Premium d&eacute;cor upgrades</li>\n</ul>\n<p>These extras can enhance presentation but may increase the total investment.</p>\n<hr>\n<h3>Is house-staging-melbourne-cost Worth It?</h3>\n<p>Many sellers wonder if staging is worth the cost. 
The answer is yes&mdash;when done correctly, staging delivers strong returns.</p>\n<p>Benefits include:</p>\n<ul>\n<li>Faster property sales</li>\n<li>Increased buyer interest</li>\n<li>Higher perceived property value</li>\n<li>Better online listing performance</li>\n</ul>\n<p>A well-staged home creates a strong first impression and attracts more competitive offers.</p>\n<hr>\n<h3>How to Reduce house-staging-melbourne-cost</h3>\n<p>If you want to manage your budget effectively, consider these tips:</p>\n<ul>\n<li>Use existing furniture for partial staging</li>\n<li>Focus on key areas like living room and bedrooms</li>\n<li>Choose simple and neutral d&eacute;cor</li>\n<li>Compare multiple staging providers</li>\n<li>Opt for short-term staging if expecting a quick sale</li>\n</ul>\n<p>These strategies help you get the best value without overspending.</p>\n<hr>\n<h3>Cost vs Return on Investment</h3>\n<p>Although <strong>house-staging-melbourne-cost</strong> may seem like an upfront expense, it often leads to:</p>\n<ul>\n<li>Reduced time on market</li>\n<li>Lower holding costs</li>\n<li>Increased sale price</li>\n</ul>\n<p>In many cases, the return on investment outweighs the initial staging cost, making it a smart decision for property sellers.</p>\n<hr>\n<h3>Final Thoughts</h3>\n<p>Understanding <strong>house-staging-melbourne-cost</strong> is essential for anyone planning to sell their property in Melbourne. By investing in professional staging, you can significantly improve your home&rsquo;s presentation, attract more buyers, and achieve better results.</p>\n<p>Whether you choose full staging or a simple styling upgrade, the right approach can make a big difference in your property&rsquo;s success.</p>",
        "topics": [
            {
                "id": 4522,
                "name": "house-staging-melbourne-cost",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4523,
                "name": "property styling cost Melbourne",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166232,
            "forum_user": {
                "id": 165996,
                "user": 166232,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/dbf792ce91cb86c77000441bf0380e7c?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-03-31T07:42:06.510626+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "thestylecast",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "house-staging-melbourne-cost-complete-pricing-guide-for-property-styling",
        "pk": 4559,
        "published": false,
        "publish_date": "2026-03-31T08:36:03.666400+02:00"
    },
    {
        "title": "Wavespace: A Highly Explorable Wavetable Generator by Hazounne Lee",
        "description": "Wavespace is a novel wavetable synthesis framework that generates wavetables by exploring a designed latent space. We factorize the latent space into groups of timbres, referred to as styles, and also condition the waveform with descriptors to manipulate spectral features. This method assists in wavetable creation by allowing smooth transitions in the waveform as the parameters are tweaked. We demonstrate it as a synthesizer that can function either as a standalone tool or within digital audio workstations (DAWs).",
        "content": "<h3><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/eb77957aff3df147dff8002732b91388.png\" /><br />Introduction</h3>\r\n<p><strong>Wavespace</strong> is a wavetable synthesis framework which enhances user control over timbre by exploring a high-dimensional space. As a single point in the&nbsp; space corresponds to one waveform, it generates wavetables with waveforms in smooth transition.</p>\r\n<p>Wavespace offers adjustable timbral parameters, including style and descriptor. <em>Style</em> is a set of waveforms that coherently represent a particular abstract timbre, such as a bottle blow or growl, and the <em>descriptor</em> is an auxiliary timbral feature such as brightness, richness, or fullness. These two are distinct concepts, as style offers an abstract concept of a timbre while the descriptor is applied thereafter for detailed timbral controls within a specific style setting.</p>\r\n<p style=\"text-align: center;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/ecc4e5e85f5978c9b1223395df5c7abd.png\" /></p>\r\n<div style=\"text-align: center;\">Figure 1 | A framework design of Wavespace. Each subspace is indicated by colors, reflecting each style.</div>\r\n<p>&nbsp;</p>\r\n<p>As shown in Figure 1, The latent space is factorized into several unique 2D style subspaces. Waveform is encoded to these subspaces, and the latent point in each subspace represents a proportion of each style. To obtain a desired wavetable, users can start from an initial setting or by encoding a waveform, then adjust parameters to transform the output waveform. 
This morphing is what we call <strong>space exploration</strong>.</p>\r\n<p>As a concrete example, to add the style assigned to W<sub>1</sub>, users can move the parameters from the unconditioned area to the conditioned area positioned at a corner of subspace W<sub>1</sub>, while crossing between these two areas within subspace W<sub>2</sub> produces slightly different waveforms with the same level of the style assigned to W<sub>2</sub>. Once the style settings are complete, increasing the parameter of the descriptor W<sub>brightness</sub> adds more brightness as a spectral feature. There can be more than one conditioned area, and the position of each can be customized. The smooth morphing enables fine-grained parametric control in addition to the interpolation feature of existing wavetables.</p>\r\n<h3>Technical Details</h3>\r\n<p>Wavespace's framework is achieved by learning the latents variationally with a conditional variational autoencoder. Readers can find detailed information in our paper <a href=\"https://arxiv.org/abs/2407.19862\">here</a>.</p>\r\n<p>Readers can also access the implementation on <a href=\"https://github.com/hazounne/wavespace/\">GitHub</a>.</p>\r\n<h3>Synthesizer</h3>\r\n<p>Readers can try the synthesizer based on Wavespace; the download link is provided <a href=\"https://github.com/kimgihong2510/WavespaceImplementation\">here</a>.</p>",
        "topics": [
            {
                "id": 1774,
                "name": "neural synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2264,
                "name": "Plug-in",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 106,
                "name": "Software",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1779,
                "name": "Synthesizer",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2252,
                "name": "wavetable synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 85258,
            "forum_user": {
                "id": 85157,
                "user": 85258,
                "first_name": "Hazounne",
                "last_name": "Lee",
                "avatar": "https://forum.ircam.fr/media/avatars/Author_Image_Hazounne_Lee.JPG",
                "avatar_url": "/media/cache/c9/83/c983d8c78714669ac513b010bc69b7c6.jpg",
                "biography": null,
                "date_modified": "2025-02-13T08:41:54.099990+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 954,
                        "forum_user": 85157,
                        "date_start": "2024-10-07",
                        "date_end": "2025-10-07",
                        "type": 0,
                        "keys": [
                            {
                                "id": 649,
                                "membership": 954
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "hazounne",
            "first_name": "Hazounne",
            "last_name": "Lee",
            "bookmarks": []
        },
        "slug": "wavespace-a-highly-explorable-wavetable-generator-1",
        "pk": 3024,
        "published": true,
        "publish_date": "2024-10-11T09:35:09+02:00"
    },
    {
        "title": "Tutoriel Modalys n°7 : Diamonds Bow Forever",
        "description": "Septième partie de ma série de tutoriels sur l'utilisation de Modalys et de ses bibliothèques dans Modalisp, OpenMusic et Max.",
        "content": "<p style=\"text-align: justify;\"><strong>Ce tutoriel porte sur la fa&ccedil;on d'incliner une assiette.</strong></p>\r\n<p style=\"text-align: justify;\">Habituellement, le fait de jouer de l'archet est toujours associ&eacute; &agrave; l'un instrument &agrave; cordes. Cependant, dans Modalys, vous pouvez jouer &agrave; peu pr&egrave;s de tous les instruments que vous voulez. J'ai donc pens&eacute; qu'il serait bon de faire un tutoriel o&ugrave; l'on jouerait sur une assiette en diamant... Enfin, soyons honn&ecirc;tes... Qui a une assiette en diamant chez soi pour en jouer ?</p>\r\n<h6></h6>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/eINliLmR_e0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h6 style=\"text-align: center;\"></h6>\r\n<p style=\"text-align: left;\"><strong>Ce tutoriel a &eacute;t&eacute; r&eacute;alis&eacute; par Olav Lervik.&nbsp;</strong></p>",
        "topics": [
            {
                "id": 461,
                "name": "Bow",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 348,
                "name": "Max externals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 133,
                "name": "Sound synthesis and treatment",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n7-diamonds-bow-forever",
        "pk": 729,
        "published": true,
        "publish_date": "2020-10-13T10:02:01+02:00"
    },
    {
        "title": "Max Workshop: Boulez Remake",
        "description": "A workshop by Grégoire Lorieux, 26 Sept. 2025, Liepaja (Latvia)",
        "content": "<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\n<p>How would Pierre Boulez compose with today&rsquo;s tools? This creative workshop revisits his musical language using Max to reconstruct, manipulate, and transform motifs taken in <em>Anth&egrave;mes 2 </em>(1997), a piece for violin and electronics. Participants will develop interactive patches to experiment with live sound transformation, algorithmic processes, and dynamic control structures. The workshop fosters a creative dialogue between Boulez's modernist approaches and contemporary technological practices.</p>\r\n<p></p>\r\n<div><span>REQUIREMENTS for PARTICIPANTS :</span></div>\r\n<p><span>- a recent computer (Mac or Windows). Airdrop allowed for Mac users, a USB stick for Windows users.</span></p>\r\n<p><span>- Max 8 or 9 installed and authorized + antescofo library installed from Ircam Forum website.&nbsp;&raquo;</span></p>\r\n<p></p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/photo_4_-_pierre_boulez,_concert_scolaire,_1983_&copy;_b._meyer.jpg\" alt=\"Pierre Boulez, concert scolaire, 1983 &copy; B. Meyer\" width=\"1200\" height=\"368\" /></p>\r\n<p><sub><span>Pierre Boulez, concert scolaire, 1983 &copy; B. Meyer</span></sub></p>\r\n<p><sub><span></span></sub></p>\r\n<p><sub><span><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></span></sub></p>",
        "topics": [],
        "user": {
            "pk": 3044,
            "forum_user": {
                "id": 3042,
                "user": 3044,
                "first_name": "Gregoire",
                "last_name": "Lorieux",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/cd7913e7acfc03b53fbc5d9c30da67ce?s=120&d=retro",
                "biography": "Grégoire Lorieux is a composer, artistic director, and computer music designer, teaching at IRCAM. After studying early music and completing a master’s thesis on Kaija Saariaho, he studied composition with Philippe Leroux and at the Conservatoire de Paris, while joining IRCAM as a technology professor. In 2012, he took part in SPEAP at Sciences Po Paris with Bruno Latour, exploring connections between art, ecology, and social engagement. Active in education, he has led numerous projects combining creation and cultural outreach, such as IRCAM’s Ateliers de la Création and Paysages Composés with Ensemble Ars Nova and Quatuor Diotima. From 2013 to 2024, he was co-director of Ensemble Itinéraire. He taught electroacoustic composition at the Paris Conservatoire from 2019 to 2024. His musical language integrates electronics and French spectralism, exploring various formats from installations to concert works. In 2022, he founded Mondes Sonores, an open-air festival linking music and ecology.",
                "date_modified": "2026-02-27T15:38:40.219400+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 354,
                        "forum_user": 3042,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 25,
                                "membership": 354
                            },
                            {
                                "id": 599,
                                "membership": 354
                            },
                            {
                                "id": 655,
                                "membership": 354
                            },
                            {
                                "id": 781,
                                "membership": 354
                            },
                            {
                                "id": 917,
                                "membership": 354
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "lorieux",
            "first_name": "Gregoire",
            "last_name": "Lorieux",
            "bookmarks": []
        },
        "slug": "max-workshop-boulez-remake",
        "pk": 3560,
        "published": true,
        "publish_date": "2025-07-17T11:37:59+02:00"
    },
    {
        "title": "AURA - Junghyun Kim, Seungjun Oh",
        "description": "Cette œuvre d'art multimédia, composée d'œuvres d'art médiatique de Junghyun Kim et de musique expérimentale de Seungjun Oh, se penche sur l'essence de l'aura, de la nature et de la reproduction. Grâce à une expression musicale dynamique et à des images synchronisées, elle offre une expérience audiovisuelle cohérente, explorant la beauté toujours changeante de la nature.",
        "content": "<p><strong><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></strong></p>\r\n<p>Presented by:&nbsp;Junghyun Kim, Seungjun Oh<br /><a href=\"https://forum.ircam.fr/profile/junghyunkim/\" title=\"Biographie Junghyun Kim\">Biography&nbsp;<span>Junghyun Kim</span></a></p>\r\n<p><strong>[AURA]</strong></p>\r\n<p>La perte de l'aura.</p>\r\n<p>Selon le philosophe Walter Benjamin, l'aura n'existe que dans l'unicit&eacute; et l'authenticit&eacute; historiques que procurent la nature et les &oelig;uvres d'art, et l'aura se perd avec la reproduction des &oelig;uvres d'art. Cependant, depuis la r&eacute;volution industrielle, nous avons continu&eacute; &agrave; vivre dans ce monde de reproduction. L'&oelig;uvre unique de Van Gogh, &lt;Nuit &eacute;toil&eacute;e&gt;, est aujourd'hui expos&eacute;e non seulement dans de nombreux mus&eacute;es d'art, mais aussi dans des restaurants en tant que d&eacute;coration int&eacute;rieure. Le fait que l'&oelig;uvre de Gogh soit utilis&eacute;e comme objet d'exposition, objet de d&eacute;coration int&eacute;rieure et objet d'inspiration personnelle nous oriente vers un progr&egrave;s cr&eacute;atif. Malheureusement, cela d&eacute;truit le caract&egrave;re unique et l'aura de l'&oelig;uvre originale. Ainsi, avec l'augmentation de la valeur d'exposition, l'aura s'effondre.</p>\r\n<p><em>Mais est-ce n&eacute;gatif ?</em></p>\r\n<p></p>\r\n<p>C'est dans la nature que nous ressentons le plus l'aura. De nombreux artistes utilisent diff&eacute;rentes formes d'expression pour tenter de capturer l'aura de la nature. Dans ce cas, devons-nous consid&eacute;rer qu'il s'agit d'une [reproduction] de la nature dans le cadre d'une toile ? 
Ou s'agit-il de l'[Aura] d'une &oelig;uvre d'art ind&eacute;pendante ?</p>\r\n<p></p>\r\n<p>En commen&ccedil;ant ce projet &agrave; partir de ce point d'incertitude, j'avais l'intention de [reproduire] une autre forme d'aura &agrave; l'int&eacute;rieur de l'aura que l'on trouve dans [l'unicit&eacute;] de la nature.</p>\r\n<p>En s&eacute;lectionnant les &eacute;l&eacute;ments de la nature dans lesquels j'ai ressenti l'aura la plus forte, j'ai commenc&eacute; mon travail sur le coucher de soleil, la for&ecirc;t, la vague et le sable. Gr&acirc;ce &agrave; une recherche d&eacute;taill&eacute;e sur la couleur de chaque &eacute;l&eacute;ment, j'ai pr&eacute;vu d'exprimer leur aura. Cependant, la nature ne produit pas les m&ecirc;mes couleurs de mani&egrave;re r&eacute;p&eacute;titive. Leurs couleurs varient en fonction de l'environnement et des conditions de la nature. J'ai pr&eacute;vu d'exprimer cela plus largement par le biais de la gradation des couleurs.</p>\r\n<p>De plus, la nature ne reste pas immobile. Elle se d&eacute;place continuellement de mani&egrave;re fluide. Pour reproduire cela autant que possible, j'ai travaill&eacute; sur une vid&eacute;o afin de repr&eacute;senter au mieux les couleurs et les mouvements constants. Le soleil se couche lentement, l'&eacute;clat doux du coucher de soleil, le vent qui souffle dans la for&ecirc;t alors que vous l'observez de l'int&eacute;rieur, le mouvement tranquille de l'eau dans une mer ou une rivi&egrave;re calme, chaque grain qui compose le sable qui bouge. 
J'ai &eacute;tudi&eacute; le mouvement pr&eacute;sent dans chaque &eacute;l&eacute;ment et je me suis demand&eacute; comment le combiner avec mon travail sur la couleur.</p>\r\n<p></p>\r\n<p></p>\r\n<p>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<span>&nbsp;</span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8a4dcae6ba1d4c91f2dc45fd3a967f5f.jpg\" />&nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/05617ef999316fad6893d41240a17afd.jpg\" /><span>&nbsp;</span>&nbsp; &nbsp; &nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3ba1ba86ff03ff338fc0f99133f32b48.jpg\" />&nbsp; &nbsp; &nbsp; &nbsp;<span>&nbsp;</span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b48ce1351c5d8f47b0702683314785b8.jpg\" />&nbsp; &nbsp; &nbsp; &nbsp;<span>&nbsp;</span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b40e0a110707e5d165280cbeb32029de.jpg\" /></p>\r\n<p>&nbsp;</p>\r\n<p>Pour relier le concept d'art m&eacute;diatique &agrave; la musique exp&eacute;rimentale, nous explorons diverses m&eacute;thodes pour &eacute;voquer l'essence de l'aura, de la nature et de la reproduction &agrave; travers le son.</p>\r\n<p></p>\r\n<p>Tout comme les couleurs de la nature varient en fonction de l'environnement et des conditions, nous introduisons des changements de rythme, de tempo et de m&eacute;lodie pour refl&eacute;ter les aspects dynamiques et en constante &eacute;volution de la nature. Pour ce faire, nous utilisons des techniques de musique &eacute;lectronique telles que la synth&egrave;se de particules, o&ugrave; nous r&eacute;p&eacute;tons et manipulons de petits fragments sonores pour cr&eacute;er des textures et des motifs changeants. Pour souligner la notion d'unicit&eacute;, nous int&eacute;grons des &eacute;l&eacute;ments de variation et de spontan&eacute;it&eacute; dans la musique. 
En exp&eacute;rimentant les techniques al&eacute;atoires, nous introduisons des &eacute;l&eacute;ments impr&eacute;visibles dans la composition, refl&eacute;tant ainsi la nature impr&eacute;visible du monde naturel.</p>\r\n<p></p>\r\n<p>Nous am&eacute;liorons l'exp&eacute;rience visuelle de l'art m&eacute;diatique en synchronisant la musique avec la vid&eacute;o qui l'accompagne. En alignant les &eacute;v&eacute;nements musicaux sur les indices visuels tels que les changements de couleur, de mouvement ou de composition, nous cr&eacute;ons une narration audiovisuelle coh&eacute;rente. En tissant ensemble ces &eacute;l&eacute;ments, nous pouvons cr&eacute;er une exp&eacute;rience multim&eacute;dia qui r&eacute;sume l'essence de l'aura, de la nature et de la reproduction, incitant le public &agrave; contempler l'interconnexion de l'art, de l'authenticit&eacute; et de la beaut&eacute; toujours changeante de la nature.</p>\r\n<p></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 1889,
                "name": "aura",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1146,
                "name": "experimental music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1888,
                "name": " media",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1705,
                "name": "nature",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 145,
                "name": "Visual effect",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 223,
                "name": "Visualize sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 55111,
            "forum_user": {
                "id": 55049,
                "user": 55111,
                "first_name": "Junghyun",
                "last_name": "Kim",
                "avatar": "https://forum.ircam.fr/media/avatars/0E5D61C5-C7A5-479D-846E-0A51B8C08B23-6081-0000021C7C3469AE.JPG",
                "avatar_url": "/media/cache/03/50/0350e9836ce7f7481f3bc6a57de64f38.jpg",
                "biography": "Junghyun Kim(Zoey Kim) is a digital artist currently pursuing studies in Digital Direction at the Royal College of Art. With a background in audio-reactive media, Junghyun explores the intersection of sound and visuals in their creative practice. Prior to relocating to London for their studies, Junghyun gained valuable experience working as a producer in South Korea. This diverse background informs Junghyun's approach to art, blending technical expertise with a deep understanding of storytelling and multimedia production.",
                "date_modified": "2024-03-15T17:29:41.671154+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "junghyunkim",
            "first_name": "Junghyun",
            "last_name": "Kim",
            "bookmarks": []
        },
        "slug": "aura",
        "pk": 2825,
        "published": true,
        "publish_date": "2024-03-12T15:08:18+01:00"
    },
    {
        "title": "Cours accéléré sur la boîte à outils du CAQ - Omar Costa Hamido (OCH)",
        "description": "La boîte à outils de composition assistée par ordinateur quantique permet aux musiciens et aux artistes de construire, d'exécuter et de simuler des circuits quantiques dans Max.",
        "content": "<p><span><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br /></span></p>\r\n<p><span>Pr&eacute;sent&eacute; par:&nbsp;Omar Costa Hamido (OCH)<br /><a href=\"https://forum.ircam.fr/profile/OCH/\">Biographie</a></span></p>\r\n<p><span></span></p>\r\n<p><span><img src=\"https://forum.ircam.fr/media/uploads/a40e3843af81ca9b2943c7cc29c8d107.png\" alt=\"\" width=\"983\" height=\"553\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<p><span></span></p>\r\n<p><span>Dans ce cours acc&eacute;l&eacute;r&eacute;, j'aborderai la plupart des fonctionnalit&eacute;s de la bo&icirc;te &agrave; outils QAC, un paquetage Max gratuit (disponible sur le gestionnaire de paquetages) qui permet aux musiciens et aux artistes de construire, d'ex&eacute;cuter et de simuler des circuits quantiques &agrave; l'int&eacute;rieur de Max. Ce tutoriel sur la composition assist&eacute;e par ordinateur quantique (QAC) comprendra une br&egrave;ve introduction au sujet de l'informatique quantique, des diff&eacute;rentes portes quantiques, de la mani&egrave;re de constuire des circuits quantiques et de les modifier par programmation, et m&ecirc;me de la mani&egrave;re de les faire fonctionner sur du mat&eacute;riel quantique r&eacute;el. 
Le cours acc&eacute;l&eacute;r&eacute; se termine par un bref aper&ccedil;u de la communaut&eacute; en ligne&nbsp;<em>community.quantumland.art&nbsp;</em>et de la mani&egrave;re de continuer &agrave; s'impliquer dans le d&eacute;veloppement du domaine de la musique &agrave; l'aide de l'informatique quantique.&nbsp;</span></p>\r\n<p><span></span></p>\r\n<h3><span>Notes</span></h3>\r\n<ul>\r\n<li><span>Max doit &ecirc;tre <a href=\"https://cycling74.com\">install&eacute;</a> et vous devez avoir des connaissances de base pour travailler avec Max/MSP.</span></li>\r\n<li><span>La Salle Nono dispose d&eacute;j&agrave; d'ordinateurs de bureau avec tous les logiciels install&eacute;s, mais vous &ecirc;tes libre d'utiliser votre propre ordinateur.</span></li>\r\n<li>\r\n<p><span>Tous les patchs et le code d&eacute;velopp&eacute;s pendant cette session seront disponibles sur le repo github (voir les liens ci-dessous).</span></p>\r\n</li>\r\n</ul>\r\n<h3><span mce-data-marked=\"1\">Liens</span></h3>\r\n<ul>\r\n<li><a href=\"https://quantumland.art/qac\"><span>https://quantumland.art/qac</span></a></li>\r\n<li><a href=\"https://quantum.ibm.com/\"><span>https://quantum.ibm.com/</span></a></li>\r\n<li><a href=\"https://community.quantumland.art/\"><span>https://community.quantumland.art/</span></a></li>\r\n<li><a href=\"https://github.com/Quantumland-art/QACcrashcourse\"><span>https://github.com/Quantumland-art/QACcrashcourse</span></a></li>\r\n</ul>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1901,
                "name": "OCH",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1900,
                "name": "QAC",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1902,
                "name": "quantum",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 16651,
            "forum_user": {
                "id": 16648,
                "user": 16651,
                "first_name": "OCH",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/33a107cd1d1b247f037eda541c6d1c7e?s=120&d=retro",
                "biography": "OCH is a performer, composer, and technologist, working primarily in multimedia and improvisation. His current research is on quantum computing and music composition, telematics, and multimedia. He is passionate about emerging technology, cinema, teaching, and performing new works. He earned his PhD in Integrated Composition, Improvisation and Technology at University of California, Irvine with his research project Adventures in Quantumland (quantumland.art). He also earned his MA in Music Theory and Composition at ESMAE-IPP Portugal with his research on the relations between music and painting. In recent years, his work has been recognized with grants and awards from MSCA, Fulbright, Fundação para a Ciência e a Tecnologia, Medici, Beall Center for Art+Technology, and IBM. He is currently a Marie-Curie Fellow at CEIS20, University of Coimbra.",
                "date_modified": "2025-03-22T17:40:47.401950+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "OCH",
            "first_name": "OCH",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 21,
                    "emitter_object_id": 277,
                    "user": 16651,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "the-qac-toolkit-crashcourse-omar-costa-hamido-och",
        "pk": 2832,
        "published": true,
        "publish_date": "2024-03-13T21:51:39+01:00"
    },
    {
        "title": "Natural Remedies for Managing Shift Work Sleep Disorder",
        "description": "The typical dosage of Modalert 200mg for managing SWSD is 200mg taken once daily, preferably one hour before the start of the work shift",
        "content": "<p>Shift Work Sleep Disorder (SWSD) is a common yet often overlooked condition that affects individuals working non-traditional hours. This article explores the impact of SWSD on health and well-being, along with natural remedies and lifestyle modifications that can help manage its symptoms. Additionally, the role of Modafinil, specifically <strong><a href=\"https://www.pills4cure.com/product/modalert-200mg/\">Modalert 200mg</a></strong>, in treating SWSD is discussed, including dosage guidelines and potential benefits. By understanding the complexities of SWSD and exploring a comprehensive approach to its management, shift workers can strive for better sleep quality and overall health.<br><br></p>\n<h2>1. Understanding Shift Work Sleep Disorder (SWSD)</h2>\n<p>&nbsp;</p>\n<h3>Definition of Shift Work Sleep Disorder</h3>\n<p>Shift Work Sleep Disorder (SWSD) is a circadian rhythm sleep disorder that commonly affects individuals who work non-traditional hours, such as night shifts or rotating shifts. It disrupts the body's natural sleep-wake cycle, leading to difficulties falling asleep, staying asleep, and achieving restorative sleep.<br><br></p>\n<h3>Causes and Risk Factors</h3>\n<p>SWSD can be caused by the misalignment between an individual's internal body clock and their work schedule. Factors such as exposure to artificial light at night, irregular meal times, and limited exposure to natural light during the day can contribute to the development of SWSD. Additionally, certain professions that require long or irregular hours, such as healthcare workers, first responders, and shift workers in manufacturing industries, are at a higher risk of experiencing SWSD.<br><br></p>\n<h2>2. 
Impact of SWSD on Health and Well-being</h2>\n<p>&nbsp;</p>\n<h3>Physical Effects of SWSD</h3>\n<p>Chronic sleep deprivation due to SWSD can lead to a variety of physical health issues, including increased risk of obesity, cardiovascular disease, diabetes, and gastrointestinal problems. Persistent fatigue and decreased immune function are also common physical effects of SWSD.<br><br></p>\n<h3>Mental Health Implications</h3>\n<p>SWSD has been linked to an increased risk of mood disorders, such as depression and anxiety, as well as cognitive impairment and decreased overall mental well-being. The emotional and psychological toll of SWSD can significantly impact an individual's quality of life.<br><br></p>\n<h3>Impact on Overall Productivity</h3>\n<p>The sleep disruptions caused by SWSD can impair cognitive function, decision-making abilities, and overall job performance. Shift workers with untreated SWSD may experience difficulties concentrating, communicating effectively, and maintaining a high level of productivity at work.<br><br></p>\n<h2>3. Natural Remedies for Managing SWSD Symptoms</h2>\n<p>&nbsp;</p>\n<h3>Sleep Hygiene Practices</h3>\n<p>Establishing a consistent sleep routine, creating a dark and quiet sleep environment, and limiting exposure to electronic devices before bedtime are essential sleep hygiene practices for managing SWSD symptoms. Maintaining a cool and comfortable bedroom temperature and using relaxation techniques, such as deep breathing or meditation, can also promote better sleep.<br><br></p>\n<h3>Dietary Recommendations for Better Sleep</h3>\n<p>Avoiding heavy meals, caffeine, and alcohol close to bedtime can help improve sleep quality for individuals with SWSD. 
Incorporating foods rich in tryptophan, magnesium, and melatonin, such as turkey, nuts, seeds, and tart cherries, into the diet may also support better sleep patterns.<br><br></p>\n<h3>Herbal Supplements and Remedies</h3>\n<p>Certain herbal supplements, such as valerian root, chamomile, and lavender, have been traditionally used to promote relaxation and improve sleep quality. Consulting with a healthcare provider or a qualified herbalist before incorporating herbal remedies into your routine is recommended to ensure safety and efficacy.<br><br></p>\n<h2>4. Importance of Healthy Sleep Habits for Shift Workers</h2>\n<p>&nbsp;</p>\n<h3>Creating a Sleep-friendly Environment</h3>\n<p>Optimizing the bedroom environment for sleep by minimizing light exposure, reducing noise levels, and investing in a comfortable mattress and pillows can help shift workers improve their sleep quality. Using blackout curtains or eye masks to block out light and white noise machines to mask disruptive sounds can create a more conducive sleep environment.<br><br></p>\n<h3>Establishing a Consistent Sleep Schedule</h3>\n<p>Maintaining a consistent sleep schedule, even on days off, can help regulate the body's internal clock and improve sleep continuity for shift workers with SWSD. Establishing a bedtime routine that signals to the body it is time to wind down, such as taking a warm bath, reading a book, or practicing relaxation techniques, can aid in falling asleep more easily and staying asleep longer.<br><br></p>\n<h2>5. Role of Modafinil (Modalert 200mg) in SWSD Management</h2>\n<p>&nbsp;</p>\n<h3>Mechanism of Action:</h3>\n<p>Modafinil, sold under the brand name Modalert 200mg, works by targeting certain neurotransmitters in the brain that regulate sleep and wakefulness. 
It helps promote alertness and reduce excessive daytime sleepiness associated with Shift Work Sleep Disorder (SWSD).<br><br></p>\n<h3>Efficacy in Treating SWSD Symptoms:</h3>\n<p>Studies have shown that Modalert 200mg is effective in improving wakefulness and cognitive function in individuals with SWSD. It can help shift workers stay more alert during their work hours and improve their overall quality of life by managing sleep disturbances.<br><br></p>\n<h2>6. Dosage and Usage Guidelines for Modalert 200mg</h2>\n<p>&nbsp;</p>\n<h3>Recommended Dosage for Shift Workers:</h3>\n<p>The typical dosage of Modalert 200mg for managing SWSD is 200mg taken once daily, preferably one hour before the start of the work shift. It's important to follow the prescribed dosage and timing to maximize the benefits of the medication.<br><br></p>\n<h3>Potential Side Effects and Precautions:</h3>\n<p>Common side effects of Modalert 200mg may include headache, nausea, nervousness, and insomnia. It's essential to consult with a healthcare provider before starting Modalert to discuss any potential side effects or interactions with other medications. Regular monitoring and communication with a healthcare professional can help manage any side effects effectively.<br><br></p>\n<h2>7. Combining Natural Remedies with Modalert for Enhanced Results</h2>\n<p>&nbsp;</p>\n<h3>Integrating Modalert with Natural Approaches:</h3>\n<p>While Modalert 200mg can be effective in managing SWSD symptoms, combining it with natural remedies like maintaining a consistent sleep schedule, creating a sleep-conducive environment, and practicing relaxation techniques can further enhance the overall effectiveness of the treatment.<br><br></p>\n<h3>Maximizing the Benefits of Combined Therapies:</h3>\n<p>By integrating Modalert with natural approaches such as regular exercise, healthy nutrition, and stress management techniques, individuals with SWSD can optimize their treatment outcomes. 
It's important to create a holistic approach to managing SWSD by combining medication with lifestyle modifications for long-term success. In conclusion, implementing a combination of natural remedies, healthy sleep habits, and, when necessary, medication like <strong><a href=\"https://www.pills4cure.com/product/modalert-200mg/\">Modalert 200mg</a></strong>, can significantly improve the quality of sleep and overall well-being for individuals struggling with Shift Work Sleep Disorder. By prioritizing proper rest and exploring various treatment options, individuals can better manage the challenges of working non-traditional hours and strive for a healthier, more <a href=\"https://forum.ircam.fr/\">balanced</a> lifestyle.</p>",
        "topics": [],
        "user": {
            "pk": 99792,
            "forum_user": {
                "id": 99667,
                "user": 99792,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/241edfc46d12a99ba4e325630dc974fa?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-01-24T08:01:23.701981+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "almasmith2930",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "natural-remedies-for-managing-shift-work-sleep-disorder",
        "pk": 3226,
        "published": false,
        "publish_date": "2025-01-24T08:10:51.230150+01:00"
    },
    {
        "title": "Un atelier de synthèse audio générative - Sinan Bokesoy",
        "description": "Une démonstration pratique de diverses techniques de synthèse audio générative - Explorez l'art et la science de la création de paysages sonores qui évoluent d'eux-mêmes.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par: Sinan Bokesoy<br /><a href=\"https://forum.ircam.fr/profile/SinanBokesoy/\">Biographie</a></p>\r\n<p><br />La synth&egrave;se aidop g&eacute;n&eacute;rative est un processus dans lequel le son est automatiquement cr&eacute;&eacute; ou modifi&eacute; par un syst&egrave;me selon un ensemble d'algorithmes ou de r&egrave;gles. Cette technique implique des algorithmes proc&eacute;duraux qui conduisent &agrave; des paysages sonores, des effets audio et des structures qui &eacute;voluent de mani&egrave;re autonome.&nbsp;</p>\r\n<p><span>Dans cet atelier, nous pr&eacute;senterons diff&eacute;rents mod&egrave;les efficaces pour cr&eacute;er des textures sonores &eacute;voluant de mani&egrave;re autonome. Pour un examen approfondi, nous proposerons des d&eacute;monstrations pratiques de g&eacute;n&eacute;ration de textures sonores polyphoniques ;</span></p>\r\n<p>- Des textures sonores polyphoniques</p>\r\n<p>- Des mod&egrave;les probabilistes qui g&eacute;n&egrave;rent des modulations audio complexes et des donn&eacute;es midi continues,</p>\r\n<p>- D<span>es structures de formes d'ondes qui s'organisent d'elles-m&ecirc;mes,</span></p>\r\n<p>-&nbsp;Des<span>&nbsp;synth&egrave;ses sonores d&eacute;riv&eacute;es de sc&egrave;nes de simulation, telles que l'espace gravitationnel ou les ondes oc&eacute;aniques.</span></p>\r\n<p>- Interactions sonores avec des architectures 3D pr&eacute;sentant des donn&eacute;es hors du temps.</p>\r\n<p>Dirig&eacute; par&nbsp;<a href=\"https://www.sonic-lab.com\">sonicLAB</a>/<a href=\"https://www.sonicplanet.com\">sonicPlanet</a>,&nbsp;<span>avec plus d'une d&eacute;cennie d'exp&eacute;rience dans la fourniture d'outils audio g&eacute;n&eacute;ratifs &agrave; l'industrie audio, cet atelier est l'occasion de se 
plonger dans l'art et la science de la g&eacute;n&eacute;ration de paysages sonores qui &eacute;voluent de mani&egrave;re autonome.</span></p>\r\n<p></p>\r\n<p><span><br /><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3330df9bfdc336f0eca82c8bec92d055.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;<strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement </a></strong></p>",
        "topics": [
            {
                "id": 396,
                "name": "Audio-visual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 272,
                "name": "Generative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 524,
                "name": "Design et traitement sonores",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 15446,
            "forum_user": {
                "id": 15443,
                "user": 15446,
                "first_name": "Sinan",
                "last_name": "Bokesoy",
                "avatar": "https://forum.ircam.fr/media/avatars/sinanportre_png.png",
                "avatar_url": "/media/cache/91/1d/911d705a8e8a4fc32df04be63c997ed8.jpg",
                "biography": "Sinan Bokesoy is an engineer, developer, and sound artist with a PhD in computer music. As the founder of sonicLAB/sonicPlanet, he has transformed his academic expertise into practical tools for composers and producers, designing software instruments that integrate algorithmic approaches with mathematical models and physical processes to create self-evolving sonic structures. Bokesoy’s work has been published and presented at numerous academic institutions and artistic events. Recognized with awards for his innovative developments, he bridges artistic creativity, scientific exploration, and technological innovation—carving out a niche in the audio tech industry.",
                "date_modified": "2026-03-02T17:03:48.699325+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "SinanBokesoy",
            "first_name": "Sinan",
            "last_name": "Bokesoy",
            "bookmarks": []
        },
        "slug": "a-generative-audio-synthesis-workshop",
        "pk": 2808,
        "published": true,
        "publish_date": "2024-03-06T14:43:39+01:00"
    },
    {
        "title": "News from the S3AM team (by Thomas Hélie with Stefan Bilbao, Charles Picasso, Thomas Risse)",
        "description": "News about available tools & current research.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>In this presentation, we will present news about:</p>\r\n<p>1) Available tools (MAX/MSP) based on physics:</p>\r\n<p>- nonlinear string</p>\r\n<p>- bowed string</p>\r\n<p>- nonlinear modal synthesis</p>\r\n<p>- BrassyFx</p>\r\n<p>2) Current research:</p>\r\n<p>- Project on Interactive Analysis/Synthesis of Musical Timbre</p>\r\n<p>- Active Control of the Piano</p>\r\n<p>- Energy-based Physical modelling</p>",
        "topics": [],
        "user": {
            "pk": 18359,
            "forum_user": {
                "id": 18352,
                "user": 18359,
                "first_name": "Thomas",
                "last_name": "Helie",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f1890bd9d8d8ef5dc06f3accb2692adf?s=120&d=retro",
                "biography": "T. Hélie is Director of Research at CNRS. He is the head of the S3AM team at STMS laboratory  hosted at IRCAM and coordinator of the ATIAM MSc (Sorbonne Université). His field of research is on nonlinear dynamical systems, control theory, signal processing, acoustics, physical modeling of audio/musical instruments and voice. He has co-authored more than 120 publications in journals or proceedings, filled 2 patents, has been involved in several collaborative projects and currently coordinates 2 of them. He has supervised more than 10 PhD students and 30 MSc students. He has been a board member of the SMAER doctoral school since 2018, and involved in councils of the French Acoustic Society (Mission leader for the \"Olympiades de Physique France\", Musical Acoustics Group: elected member since 2011; Speech Acoustics Group: elected member since 2016). He was elected representative of researchers at STMS (2006-19) and is the STMS contact of the INS2I innovation unit (since 2021). One of his patented inventions will become one of the 5 manip'Icons of the Palais de la Découverte 2025, Paris.",
                "date_modified": "2026-02-16T13:32:55.998309+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 244,
                        "forum_user": 18352,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "helie",
            "first_name": "Thomas",
            "last_name": "Helie",
            "bookmarks": []
        },
        "slug": "news-from-the-s3am-team-by-thomas-helie-with-stefan-bilbao-charles-picasso-thomas-risse",
        "pk": 4364,
        "published": true,
        "publish_date": "2026-02-16T13:35:32+01:00"
    },
    {
        "title": "Conception sonore de terres urbaines - composer dans (l'intérieur de) l'existant",
        "description": "Résidence en recherche artistique 2018.19. \r\nNadine Schütz.\r\nAu sein des équipes Espaces acoustiques et cognitifs et Perception et design sonores de l'Ircam-STMS.",
        "content": "<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>R&eacute;sidence en recherche artistique 2018.19</h3>\r\n<p><strong>Conception sonore de terres urbaines - composer dans (l'int&eacute;rieur de) l'existant</strong><br />Au sein des &eacute;quipes<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac/\">Espaces acoustiques et cognitifs</a>&nbsp; et<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/pds/\">Perception et design sonores</a><span>&nbsp;</span>de l'Ircam-STMS</p>\r\n<p>Les sons sont une dimension intrins&egrave;que de la relation entre les hommes et leur environnement. Alors que la pratique commune de l'am&eacute;nagement urbain se concentre uniquement sur le traitement d&eacute;fensif des sons ind&eacute;sirables, nous devrions plut&ocirc;t nous pr&eacute;parer &agrave; la conception active de qualit&eacute;s acoustiques pour les espaces publics, qui devient de plus en plus important : au-del&agrave; des notions abstraites de &laquo; bruit&nbsp;&raquo; et de &laquo; silence &raquo;, le son peut contribuer &agrave; une riche exp&eacute;rience environnementale, en offrant de l'espace pour l'imagination, la communication et la polyphonie urbaine. Ma proposition de r&eacute;sidence rel&egrave;ve de ce contexte. Son objectif artistique est en m&ecirc;me temps un objectif op&eacute;rationnel : Afin de promouvoir l'int&eacute;gration du son dans les projets de paysage urbain, le d&eacute;veloppement d'outils de conception respectifs joue un r&ocirc;le cl&eacute;. Concevoir, composer dans un environnement urbain ou paysager, implique toujours de travailler avec l'identit&eacute; donn&eacute;e d'un site, sa structure physique, ses conditions sociales, ses constellations sonores et ses caract&eacute;ristiques acoustiques. 
L'enjeu en ce qui concerne l'exploitation artistique et la mise en &oelig;uvre de telles observations d&eacute;passe l&rsquo;analyse : il implique la composition sonore et la simulation acoustique, et demande que les m&eacute;thodes respectives de pr&eacute;figuration et d'&eacute;valuation soient int&eacute;gr&eacute;es dans le processus de conception.</p>\r\n<p>Les technologies sophistiqu&eacute;es d'investigation des spatialit&eacute;s acoustiques et cognitives d&eacute;velopp&eacute;es par l'&eacute;quipe Espaces acoustiques et cognitifs de l'Ircam fournissent des outils pertinents pour la cr&eacute;ation artistique et le d&eacute;veloppement de projets dans ce domaine, qui vont au-del&agrave; de l'&eacute;tat de l'art. L'int&eacute;gration de nouveaux sons dans un contexte urbain existant implique en outre d'importants aspects s&eacute;mantiques, qui seront abord&eacute;s avec l'&eacute;quipe Perception et design sonore, bas&eacute;e sur son travail sur la cat&eacute;gorisation lexicale et morphologique des sons environnementaux. La r&eacute;sidence se concentrera sur la combinaison de ces approches autour d'un projet concret, le<span>&nbsp;</span><em>Canopy of Reflections</em>, qui sera r&eacute;alis&eacute; en 2019-2021 dans le cadre de la r&eacute;novation de la Place de la D&eacute;fense. 
Le but est l'&eacute;laboration d'un prototype audible de l'installation, une simulation spatiale de l'exp&eacute;rience sonore que le projet r&eacute;alis&eacute; offrira aux futurs utilisateurs / visiteurs de La D&eacute;fense.</p>\r\n<h1><span>Nadine Sch&uuml;tz</span></h1>\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202019/.thumbnails/nadine_schutz.jpg/nadine_schutz-135x135.jpg\" alt=\"person\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographie</h3>\r\n<p>Le travail de Nadine Sch&uuml;tz<span>&nbsp;</span><em>(((Echora)))</em><span>&nbsp;</span>tient du paysage et de l'architecture, de l&rsquo;acoustique environnementale, de la musique et de la psychoacoustique. S&rsquo;appuyant sur une recherche &agrave; la fois th&eacute;orique et po&eacute;tique, elle explore &agrave; diff&eacute;rentes &eacute;chelles et dans le cadre de diff&eacute;rentes commandes la dimension sonore de l&rsquo;espace, &agrave; travers des installations sonores et un travail sur les ambiances acoustiques qui mettent en relation l&rsquo;urbain et l&rsquo;humain, la musique et le paysage.</p>\r\n<p>Pendant quatre ans Nadine Sch&uuml;tz a dirig&eacute; le laboratoire multim&eacute;dia de l&rsquo;institut du paysage &agrave; la prestigieuse &eacute;cole polytechnique de Zurich (ETH), aux c&ocirc;t&eacute;s de Christophe Girot. En 2013 sa d&eacute;marche de conception acoustique paysag&egrave;re est couronn&eacute;e du prix Young Researchers Thinking the Contemporary Landscape de la Fondation Volkswagen. Son travail a &eacute;t&eacute; pr&eacute;sent&eacute; au Mus&eacute;e Migros d'art contemporain (Zurich), au Mus&eacute;e d'Art Contemporain de Moscou, au 3331 Arts Chiyoda (Tokyo) et pendant la Longue Nuit des Mus&eacute;es de Zurich. 
Parmi ses projets en cours, une installation sonore pour le parvis du nouveau Palais de Justice &agrave; la Porte de Clichy &agrave; Paris avec Moreau Kusunoki, un parcours de paysage sonore pour le nouveau Franchissement Urbain Pleyel de Marc Mimram &agrave; Saint-Denis, une sc&eacute;nographie sur la vie des rues au S&eacute;n&eacute;gal, une commande du Kyoto Institute of Technology au Japon pour interroger la dimension sonore du jardin japonais traditionnel, et la collaboration avec BASE paysagistes sur la Place de La D&eacute;fense. En 2017 elle a finalis&eacute; son doctorat en sciences sur La Dimension Sonore du Paysage &agrave; l&rsquo;ETH Zurich, o&ugrave; elle a &eacute;galement install&eacute; un nouveau laboratoire de simulation acoustique paysag&egrave;re.</p>\r\n<p>Nadine Sch&uuml;tz, n&eacute;e en 1983 en Suisse, vit et travaille &agrave; Paris et Zurich. Architecte diplom&eacute;e de l&rsquo;ETH Zurich, elle a &eacute;galement suivi l&rsquo;enseignement en acoustique et musique du Signal and Information Processing Laboratory (ETH) et de l&lsquo;Institute for Computer Music and Sound Technology (ICST) &agrave; l&rsquo;universit&eacute; des arts &agrave; Zurich (ZHdK). 
Ainsi, elle a rassembl&eacute; un ensemble polyvalent de comp&eacute;tences conceptuelles, artistiques et techniques qui lui permettent d'op&eacute;rer vers un nouveau domaine de l'art environnemental que l'on pourrait appeler le travail d'un architecte du son.</p>\r\n</div>\r\n<p><strong>Courriel :</strong><span>&nbsp;</span>nadine.schutz (at) ircam.fr</p>\r\n<ul class=\"unstyled-list\">\r\n<li class=\"mb1\"><strong>&Eacute;quipe :<span>&nbsp;</span></strong><a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac/\">Espaces acoustiques et cognitifs</a></li>\r\n</ul>\r\n<h2 class=\"dotted\">Liens</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"https://www.echora.ch/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>https://www.echora.ch/</a></li>\r\n<li><a href=\"http://girot.arch.ethz.ch/current-staff/nadine-schuetz\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>http://girot.arch.ethz.ch/current-staff/nadine-schuetz</a>\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<figure class=\"person-list-box__image profile\"></figure>\r\n</div>\r\n</li>\r\n</ul>\r\n</div>",
        "topics": [
            {
                "id": 43,
                "name": "EAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "conception-sonore-de-terres-urbaines-composer-dans-linterieur-de-lexistant",
        "pk": 17,
        "published": true,
        "publish_date": "2019-03-20T17:21:24+01:00"
    },
    {
        "title": "\"Liberated frequencies\" by Keigo Yoshida",
        "description": "\"liberated frequencies\" redefines auditory pleasure by freeing AI from human-centric aesthetics. In advance, glitch, noise, voice, and experimental sounds were rated by a subject based on perceived pleasure. During this A/V Performance, AI learns from the most highly rated sounds and generates evolving soundscapes. The subject wears EEG sensors measuring theta waves (4–8 Hz), linked to auditory pleasure. When brain activity indicates increased pleasure, the AI disrupts it—altering pitch, tempo, and rhythm to deviate from the subject’s preferences. This creates a feedback loop that challenges the boundaries of comfort, asking whether such “liberated” sounds disturb or expand our auditory experience. We will be performing at IRCAM Forum Workshops 2026 in Paris and Enghien-les-Bains.",
        "content": "<h5><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></h5>\r\n<p><em></em></p>\r\n<p><em>liberated frequencies -&nbsp;</em>explores unprecedented soundscapes that defy our traditional auditory pleasures by \"liberating\" AI from the limitations of human-defined &lsquo;pleasing'.<br />Before the production, our team gathered glitch, experimental, voice and noise sounds, which a subject later rated based on the pleasure they evoked. During performance, the AI continuously learns in real-time from the highest-rated sounds. Utilizing this sound data, the AI predicts and generates the subsequent auditory experiences, creating an evolving and immersive soundscape.<br />The subject in the soundscape wears EEG sensors that measure real-time theta waves (4-8 Hz) of brain activity.&nbsp; According to Sammler et al. (2007), increased activity in this frequency band is typically associated with intensified auditory pleasure. However, in response to this heightened brain-based pleasure, the AI&mdash;continuously learning from the real-time EEG data&mdash;intentionally disrupts the experience. It transforms the generated sounds, subtly altering pitches, waveforms, tempos and syncopations, gradually diverging from the original sound patterns the subject found&nbsp;pleasurable.</p>\r\n<p>This deliberate shift invites the viewer to explore the boundaries of discomfort, challenging the conventional auditory aesthetics inherently favored by human perception. 
Do these deliberately 'liberated' sounds merely traumatize the human senses, or do they open a gateway to new auditory expressions and possibilities?</p>\r\n<p>github:&nbsp;<a href=\"https://github.com/keigoyoshida7/liberated-frequencies\"><span>https://github.com/keigoyoshida7/liberated-frequencies</span></a></p>\r\n<p>HP:&nbsp;<a href=\"https://keigoyoshida.jp/room20.html\"><span>https://keigoyoshida.jp/room20.html</span></a></p>\r\n<p><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3914b9341c67554e9cb7e2de1a08953a.png\" width=\"1209\" height=\"676\" /></span></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/fb52737e57fa7d2f92842d524b502eb4.png\" width=\"768\" height=\"585\" /></p>",
        "topics": [
            {
                "id": 3462,
                "name": "AI & Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3465,
                "name": "auditory pleasure",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3463,
                "name": "EEG",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3461,
                "name": "Improvised Generative Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3464,
                "name": "Theta wave",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 122344,
            "forum_user": {
                "id": 122180,
                "user": 122344,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/press-photo.jpg",
                "avatar_url": "/media/cache/d4/16/d416b58e1caadfe8dea60bd812255263.jpg",
                "biography": "Keigo Yoshida is an artist and scientist affiliated with Center for Music Neuroscience at Keio University Graduate school of Media and Governance. He explores music through the perspectives of neuroscience and computer science as machine learning, integrating insights into various forms of artistic expression, including audiovisual works, installations, and musical compositions.\n\nHis notable works include Propagation (A/V performance), Mineral Neurons (A/V performance) at Sónar+D, liberated frequencies (A/V performance and installation in collaboration with METI and Rhizomatiks), Reservoir Audio Visual Performance (presented at TEDx KeioU Conference), and Artificial Heart Brain (a project from Keio University's Data-Driven Class, Daito Manabe Grand Prize). Additionally, he worked on Hanamizuki Reworked, feat. Yo Hitoto.\nAs a VJ, he performs in Radio Sakamoto Uday for SE SO NEON and TOWA TEI.\n\nBeyond his creative endeavors, he has actively contributed to the field of music neuroscience. He performed an AI-driven showcase at Tsukuba Conference For Future Shapers 2023 and presented his research at The Neurosciences and Music - VIII in Helsinki, Finland.",
                "date_modified": "2026-03-02T06:52:37.972899+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1439,
                        "forum_user": 122180,
                        "date_start": "2026-03-16",
                        "date_end": "2027-03-16",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "keigoyoshida",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3750,
                    "user": 122344,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4366,
                    "user": 122344,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4367,
                    "user": 122344,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4147,
                    "user": 122344,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "liberated-frequencies-by-keigo-yoshida",
        "pk": 4147,
        "published": true,
        "publish_date": "2026-01-08T08:42:18+01:00"
    },
    {
        "title": "Peur de posséder un corps - Lena Meinhardt, Eva Dörr",
        "description": "Une composition à 8 canaux basée sur RAVE, inspirée d'un poème d'Emily Dickinson.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par : Lena&nbsp;Meinhardt<br /><a href=\"https://forum.ircam.fr/profile/MariaRose/\">Biographie</a></p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\">La composition Afraid to own a body a &eacute;t&eacute; compos&eacute;e pour la Long Night of Transitions et pr&eacute;sent&eacute;e au Kunstmuseum Stuttgart dans le cadre de l'exposition Shift KI. Il s'agit d'une composition de 8 canaux de m&eacute;dia fixe bas&eacute;e sur un po&egrave;me d'Emily Dickinson datant d'environ 1866.</p>\r\n<p style=\"text-align: justify;\"><br />Pour g&eacute;n&eacute;rer notre mat&eacute;riel sonore, nous avons travaill&eacute; avec le logiciel IRCAM de l'&eacute;quipe RAVE. Nous avons entra&icirc;n&eacute; notre propre mod&egrave;le &agrave; apprendre nos deux couleurs de voix. Nous avons trouv&eacute; les \"fractures\" particuli&egrave;rement int&eacute;ressantes. Il s'agit de moments o&ugrave; le mod&egrave;le ne sait pas exactement ce qu'il doit faire. Ainsi que les diff&eacute;rents stades d'apprentissage du mod&egrave;le.<br />En ce qui concerne la forme de la composition, nous voulions tracer une ligne de d&eacute;marcation entre quelque chose qui se d&eacute;veloppe et se d&eacute;sagr&egrave;ge &agrave; nouveau. 
Seul du mat&eacute;riel vocal a &eacute;t&eacute; utilis&eacute;.</p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p><img src=\"/media/uploads/20231028_105910_lenaeva_portr_n_-_lena_meinhardt.jpg\" alt=\"\" width=\"192\" height=\"287\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 1740,
                "name": "8 channel",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 753,
                "name": "Artificial intelligence,",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1739,
                "name": "fixed media ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1741,
                "name": "poem",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 79,
            "forum_user": {
                "id": 79,
                "user": 79,
                "first_name": "Lena",
                "last_name": "Meinhardt",
                "avatar": "https://forum.ircam.fr/media/avatars/20231028_105910_LenaEva_Portr_n-SMALL-Zuschnitt.jpg",
                "avatar_url": "/media/cache/45/d6/45d674ea6f68b7e41416812a4e070c27.jpg",
                "biography": "In Lena Meinhardt's compositions, recordings of places, objects or texts take on a life of their own through sound synthesis. Together with Eva Dörr, they create interdisciplinary and context-related works. Eva Dörr's artistic focus is on (sound) installations and videos. It focuses on the acoustic perception of spaces and places that are mostly marginal.\nLena Meinhardt and Eva Dörr have been working together as an artist duo since 2019. Their work occurs in the field of sound installation.",
                "date_modified": "2026-01-06T19:31:53.782602+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1103,
                        "forum_user": 79,
                        "date_start": "2026-03-25",
                        "date_end": "2027-03-25",
                        "type": 0,
                        "keys": [
                            {
                                "id": 778,
                                "membership": 1103
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "MariaRose",
            "first_name": "Lena",
            "last_name": "Meinhardt",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 79,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "afraid-to-own-a-body",
        "pk": 2719,
        "published": true,
        "publish_date": "2024-02-13T10:01:29+01:00"
    },
    {
        "title": "Xinghai EIE Studio Presentation by Marco Bidin & students",
        "description": "Online presentations and demo performances from the Electronic Instrument Engineering students of the Xinghai Conservatory of Music. Introduction and moderation by Marco Bidin.",
        "content": "<h2>Xinghai EIE Studio Presentation</h2>\r\n<h2>- Marco Bidin &amp; students</h2>\r\n<p style=\"font-weight: 400;\">The integration of cutting-edge technology in the field of audio and visual interaction has opened up new frontiers for immersive experiences. In this article, we will explore the innovative projects presented by Bin Yuan (Tyler), Zilu Li, and Zheng Yizhong from the Electronic Instrument Engineering students of the Xinghai Conservatory of Music, each of which showcases the potential for transformative experiences in the realm of sound and spatial interaction.&nbsp;</p>\r\n<p style=\"font-weight: 400;\"><img src=\"/media/uploads/bidin_1.png\" alt=\"\" width=\"724\" height=\"463\" /></p>\r\n<p style=\"font-weight: 400;\"><span>Bin Yuan's&nbsp;<strong>T-Voice&nbsp;</strong>introduces the 4-channel surround voice effector, leveraging the Classic Vocoder to deliver a truly immersive sound experience. This effector goes beyond conventional audio manipulation, incorporating stutter sounds with stutter effects linked with Grainfloat Reverb to create ethereal effects. The Classic Vocoder plays a pivotal role in altering sound features for tone and volume control, while GrainflowHarmonize provides fine pitch control and harmonics. Moreover, the GrainflowRecipe replays sound, adjusting signals for balanced stereo reverb. The integration of a multi-harmonizer further enhances the stability of input/output signals, utilizing pitch shift for pitch adjustments. 
Notably, the background music, composed in Ableton Live, employs advanced techniques such as Grain Delay, Glue Compressor, and Reverb to enhance the theme of spiritual reflection and life changes.&nbsp;</span></p>\r\n<p style=\"font-weight: 400;\"><img src=\"/media/uploads/bidin_2.png\" alt=\"\" width=\"728\" height=\"494\" /></p>\r\n<p style=\"font-weight: 400;\">Moving on to Zilu Li's<strong>&nbsp;Full-Sense Interactive Paint Control System</strong>, we encounter an innovative approach that intertwines brush actions with real-time control of MIDI pitch, audio filtering, visuals, and precise spatial positioning in surround sound settings. By linking brush colors to sound zones, the system automatically adjusts audio outputs as colors change, thereby enhancing the musical expressiveness of the artwork. The system's support for various brush shapes and sizes empowers users to tailor their artistic expression based on their unique style and technical requirements, enriching the creative process. With five independent channels, each equipped with reverb and filtering capabilities, the system delivers immersive spatial audio effects that elevate the overall artistic experience.</p>\r\n<p style=\"font-weight: 400;\"><img src=\"/media/uploads/bidin_3.jpg\" alt=\"\" width=\"741\" height=\"555\" /></p>\r\n<p style=\"font-weight: 400;\">Zheng Yizhong's<strong>&nbsp;Interactive twin coordinate displacement device for surround sound&nbsp;</strong>represents a significant leap in personalized audio and visual interaction experiences. By capturing the user's coordinate position through Microsoft Azure Kinect and using the TouchDesigner software to receive real-time coordinate data, the system sets its capture area to 5 meters in length and 2.5 meters in width. Subsequently, it sends displacement coordinates and bone points to Max/MSP through OSC signals, enabling real-time motion capture for two-person interaction. 
This functionality allows the system to track the players and adjust the surround sound output accordingly, resulting in a highly immersive and personalized audio and visual interaction experience.</p>\r\n<p style=\"font-weight: 400;\"><span>&nbsp;</span></p>\r\n<p style=\"font-weight: 400;\"><span>In conclusion, these pioneering projects exemplify the convergence of technology and creativity, offering a glimpse into the future of immersive sound and spatial interaction. As these innovations continue to evolve, they hold the potential to revolutionize the way we perceive and interact with audio and visual experiences, ushering in a new era of multi-sensory engagement.</span></p>\r\n<p style=\"font-weight: 400;\"><span><img src=\"/media/uploads/bin_yuan.jpg\" alt=\"\" width=\"550\" height=\"550\" /></span></p>\r\n<p style=\"font-weight: 400;\"><span><br /><strong>Bin Yuan (Tyler)</strong>, a Xinghai Conservatory of Music student, specializes in ambient music production. He excels in software synthesizers and uses Max/MSP for sound design. Tyler participated in the 2022 Guangzhou International Musical Instruments Exhibition and was a Brand Ambassador at the 2023 Shanghai International Musical Instruments Exhibition. He presented his sequencer work online at the 2024 IRCAM Forum and performed live at the 2024 Beijing \"Exchange Methods\" event. He is dedicated to exploring new music technology, guided by the motto &ldquo;<em>Stay hungry, stay foolish</em>.&rdquo;</span></p>\r\n<p style=\"font-weight: 400;\"><img src=\"/media/uploads/fille_d&eacute;tour&eacute;e.jpg\" alt=\"\" width=\"517\" height=\"520\" /></p>\r\n<p style=\"font-weight: 400;\"><strong><span>Zilu Li</span></strong><span>&nbsp;is a Bachelor's student focusing on Music Production and Interactive Design. She was a MIDI Promotion Ambassador at the International Musical Instruments Exhibition in Shanghai (2023), in charge of presentations and demonstrations. 
She participated several times as a staff member at the Guangzhou International Musical Instruments Exhibition, performing live and presenting interactive art installations.&nbsp;</span></p>\r\n<p style=\"font-weight: 400;\"><img src=\"/media/uploads/fille_1.jpg\" alt=\"\" width=\"537\" height=\"535\" />&nbsp;</p>\r\n<p style=\"font-weight: 400;\"><strong><span>Yizhong Zheng</span></strong><span>&nbsp;is a student from the Xinghai Conservatory of Music, majoring in electronic musical instrument engineering and interested in interaction design. In March 2023, she presented her original Kontakt sampling plug-in works and lectures at the IRCAM Forum in Paris. In April 2024, she participated in the surround sound production of artworks at the 2024 Italian Biennale and in June in the Beijing Synthesizer Communication Works Exhibition. She participated in the Shanghai International Musical Instrument and Guangzhou Musical Instrument Exhibition. Yizhong focuses on creating innovative works and developing new technologies.&nbsp;</span></p>\r\n<p style=\"font-weight: 400;\"><strong><span>Panel coordinator: Prof Marco Bidin.</span></strong></p>\r\n<p style=\"font-weight: 400;\"><strong><span>EIE PRESENTATION SUPERVISORS</span></strong></p>\r\n<p style=\"font-weight: 400;\"><span>Prof<strong>&nbsp;Hao Yinan</strong>, Deputy Director of the Department of Musical Instrument Engineering.</span></p>\r\n<p style=\"font-weight: 400;\"><span>Mr&nbsp;<strong>Wu Zhou</strong>, EIE Studio director.</span></p>\r\n<p style=\"font-weight: 400;\"><strong><span>EIE at Xinghai Conservatory of Music</span></strong></p>\r\n<p style=\"font-weight: 400;\"><span>The&nbsp;<strong>Electronic Instrument Engineering</strong>&nbsp;Department was established in 2016. It is the youngest department of the Department of Musical Instrument Engineering of Xinghai Conservatory of Music and began its activities in the same year. 
With most of the country's electronic musical instrument manufacturers located in Guangdong, the emergence of a programme for electronic musical instrument design, research and development was a natural step.&nbsp;Through sustained efforts in recent years, the department's teaching and research have gradually reached a standardised, systematic and rigorous stage of development. Its main courses currently include digital synthesisers, sound effects, sampling, MIDI controllers, sequencers, hardware design and production, interactive apps, art installations, and prototype design.&nbsp;The&nbsp;<strong>Xinghai Conservatory of Music</strong>&nbsp;is a higher music education institution in Guangzhou City, Guangdong Province, China. It is named after the famous composer Xian Xinghai and was established in 1932 by the composer Ma Sicong as the Guangzhou Conservatory of Music.&nbsp;</span></p>\r\n<p><span style=\"font-weight: 400;\"><a href=\"https://yqgc.xhcom.edu.cn/index.jsp?urltype=tree.TreeTempUrl&amp;wbtreeid=1001\"><span>https://yqgc.xhcom.edu.cn/index.jsp?urltype=tree.TreeTempUrl&amp;wbtreeid=1001</span></a></span></p>",
        "topics": [
            {
                "id": 1805,
                "name": "electroacoustic music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1850,
                "name": "interactive music system",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 308,
                "name": "Live electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2325,
                "name": "syntesizers",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20786,
            "forum_user": {
                "id": 20775,
                "user": 20786,
                "first_name": "Marco",
                "last_name": "Bidin",
                "avatar": "https://forum.ircam.fr/media/avatars/cv_pic.jpg",
                "avatar_url": "/media/cache/c8/12/c812194ab029dcbb2712b19a78eabf13.jpg",
                "biography": "Marco Bidin is a composer, artistic director, organist and harpsichord player from Italy.\n\nAfter his Organ degree in Italy, he studied Early Music performance in Trossingen and Contemporary Music performance in Stuttgart. Subsequently, under the guidance of Marco Stroppa, he completed the terminal degree (Konzertexamen) in Composition and the Certificate of Advanced Studies in Computer Music.\n\nMarco Bidin is active as an international composer and performer. He was invited in institutions like IRCAM (Paris, France), Shanghai Conservatory (China), Silpakorn University (Bangkok, Thailand) and Seoul National University (South Korea) among others.\n\nHe worked as a lecturer for Composition at the HMDK Stuttgart and as an organist for the Protestant Church in Stuttgart. 2010-2023 he was the artistic director of the italian-based NGO association ALEA. He is currently Associate Professor at the Electronic Instrument Engineering Department of the Xinghai Conservatory of Music in Guangzhou, China.",
                "date_modified": "2026-03-04T11:59:23.041276+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 988,
                        "forum_user": 20775,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    },
                    {
                        "id": 634,
                        "forum_user": 20775,
                        "date_start": "2023-11-16",
                        "date_end": "2024-11-16",
                        "type": 0,
                        "keys": [
                            {
                                "id": 155,
                                "membership": 634
                            },
                            {
                                "id": 406,
                                "membership": 634
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "mbalea",
            "first_name": "Marco",
            "last_name": "Bidin",
            "bookmarks": []
        },
        "slug": "xinghai-eie-studio-presentation",
        "pk": 3050,
        "published": true,
        "publish_date": "2024-10-22T15:04:37+02:00"
    },
    {
        "title": "Flûte électronique conception et prototypage - S. Conforti, E. Flety, M. Malt (IRCAM)",
        "description": "Présenté lors des Ateliers du Forum Ircam 2023 à Paris.",
        "content": "<p>&laquo; Est-il possible de pr&eacute;server les techniques de jeu, les gestes et l&rsquo;ergonomie instrumentale sp&eacute;cifiques lors de la conception d&rsquo;un instrument compl&egrave;tement &eacute;lectronique ? \"&nbsp;</p>\r\n<p><br />Le march&eacute; des instruments et contr&ocirc;leurs, de tout genre, et la recherche appliqu&eacute;e, dans le domaine de la fl&ucirc;te traversi&egrave;re, manquent d&rsquo;exemples qui auraient approfondi cette question.&nbsp;Cette recherche essaye de d&eacute;velopper un instrument qui, ayant comme base la fl&ucirc;te traversi&egrave;re, conservera toutes les caract&eacute;ristiques et les propri&eacute;t&eacute;s de l&rsquo;instrument acoustique, tout en &eacute;tant, un contr&ocirc;leur &eacute;lectronique. &raquo;</p>\r\n<p></p>\r\n<p>S. Conforti, E. Flety, M. Malt ( IRCAM)&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 17504,
            "forum_user": {
                "id": 17501,
                "user": 17504,
                "first_name": "Simone",
                "last_name": "Conforti",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/bd5643c4ddc3901d7416b5450d303925?s=120&d=retro",
                "biography": "Composer, computer music designer, sound designer and software developer, born in Winterthur, graduated in Flute and Electronic Music.\r\n\r\nComputer Music Designer professor at IRCAM and Co-founder and CTO of MUSICO. \r\n\r\nFormerly co-founder of MusicFit and MUSST, has worked for ArchitetturaSonora, and as researcher for the Basel University, the HEM Geneva, the HEMU in Lausanne and the MARTLab research center in Florence.\r\n\r\nSpecialised in interactive and multimedia arts, his work passes also through an intense activity of music oriented technology design, in this field he has developed many algorithms which ranges from sound spatialisation and space virtualisation to sound masking and to generative music.\r\n\r\n\r\nHe has been professor in Electroacoustic Composition and Computer Music at the Conservatoire of Cuneo and Florence and worked as computer music designer at CIMM of Venice Biennale.",
                "date_modified": "2026-02-22T12:42:43.061633+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 784,
                        "forum_user": 17501,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [
                            {
                                "id": 524,
                                "membership": 784
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "conforti",
            "first_name": "Simone",
            "last_name": "Conforti",
            "bookmarks": []
        },
        "slug": "flute-electronique-conception-et-prototypage-s-conforti-e-flety-m-malt-ircam",
        "pk": 2079,
        "published": true,
        "publish_date": "2023-02-24T11:34:26+01:00"
    },
    {
        "title": "Nouvel accordage théorie/pratique (réécriture de l'article du 18/01/2020)",
        "description": "Nouvel accordage théorie/pratique",
        "content": "<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1897cece1e7220bf852b8f380a65988c.png\" /></p>\r\n<p>La fonction rectiligne y=.0833333*x..... tout simplement un douzi&egrave;me, forme une s&eacute;rie ordonn&eacute;e de 12 rapports dont le deuxi&egrave;me cycle est un cycle de \"doublement\", c'est-&agrave;-dire 1.0833333-2. Appliqu&eacute;e &agrave; n'importe quelle valeur de fr&eacute;quence de d&eacute;part, elle produira une s&eacute;rie de fr&eacute;quences qui pr&eacute;servera certains des pr&eacute;cieux rapports de la s&eacute;rie harmonique tout en &eacute;tant capable d'alimenter des s&eacute;ries d'\"octaves\" cons&eacute;cutives, sans qu'une virgule ne se produise. La ligne verte est 12&radic;2/12*x (12TET)</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/thumbs/ab51403bc1a11975f4b869e011614d01.png/ab51403bc1a11975f4b869e011614d01-901x285.png\" /></p>\r\n<p>La logique est la suivante : 1/12 = .083333333 qui est en fait 1 de 12 donc =1 ; 2/12 = .16666666 qui est 2 de 12 donc = 2 qui prend en charge le cycle jusqu'&agrave; 12 ou 12/12. Le deuxi&egrave;me cycle commence maintenant &agrave; 13, d&eacute;riv&eacute; du produit 12*13/12 = 13 et 14 est d&eacute;riv&eacute; du produit 12*ratio 7/6, 15 du facteur 12*ratio 5/4 et ainsi de suite, qui sont les ratios de la deuxi&egrave;me p&eacute;riode de la fonction y=.0833333*x. Au point de \"doublement\", le cycle recommence.... donc 24*13/12=26, 24*7/6=28.</p>\r\n<p>La s&eacute;quence de rapports peut &ecirc;tre appliqu&eacute;e &agrave; n'importe quelle fr&eacute;quence comme point de d&eacute;part et faire l'objet d'un cycle continu selon la m&ecirc;me m&eacute;thode. 
Cette m&eacute;thode peut &eacute;galement &ecirc;tre extrapol&eacute;e pour fonctionner &agrave; partir de fonctions d&eacute;riv&eacute;es similaires de toutes les fractions s&eacute;rialis&eacute;es similaires et, dans le deuxi&egrave;me cycle de cette fonction, elle correspondra toujours &agrave; la p&eacute;riode de \"doublement\", ce qui permet d'obtenir des intervalles par p&eacute;riode de 1/2(3/2) et plus vers l'infini. Ces rapports coexisteront &eacute;galement &agrave; l'infini dans leurs s&eacute;quences respectives, par &eacute;tapes, c'est-&agrave;-dire que 12*1,66666666 (8e rapport du 2e cycle de la fonction 1/12) est identique &agrave; 15*1,333333333 (5e rapport du 2e cycle de la fonction 1/15). Toutes sortes de donn&eacute;es et de relations peuvent ainsi &ecirc;tre d&eacute;riv&eacute;es des r&eacute;sultats de ces fonctions.</p>\r\n<p></p>\r\n<h3>&Eacute;couter de jolies images</h3>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/thumbs/1484f189da359456964389ae568dd48d.png/1484f189da359456964389ae568dd48d-1399x183.png\" /></p>\r\n<p>144 est le dividende commun et la p&eacute;riode horizontale pour tous les &eacute;l&eacute;ments internalis&eacute;s dans les instances du syst&egrave;me r&eacute;apparaissant verticalement au niveau 13, puis &agrave; nouveau 26, etc. Par analogie, 0-144 sont les n&oelig;uds de la premi&egrave;re instance.</p>\r\n<p><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/thumbs/e766d7c2aeac106e323438ee02f32669.png/e766d7c2aeac106e323438ee02f32669-1341x608.png\" /></span></p>\r\n<h4>Notes suppl&eacute;mentaires :</h4>\r\n<p>12 cycles TET \"pr&eacute;matur&eacute;ment\" &agrave; 11,3265 environ.</p>\r\n<p>Les rapports pythagoriciens dans cette s&eacute;quence sont 1/1, 5/4, 4/3, 3/2, 5/3, 7/4, 11/6, 2/2 et se \"marient\" assez bien dans certains cas si on les \"mappe\" en arri&egrave;re d'un demi-ton. 
J'en conclus imm&eacute;diatement qu'il y a une dualit&eacute; avec le traitement math&eacute;matique du 0 par rapport au 1, m&eacute;lang&eacute; &agrave; la n&eacute;cessit&eacute; de trouver des ressources &agrave; partir d'une s&eacute;quence de rapports de \"doublage\"... Cependant, je serais heureux que quelqu'un disposant d'installations puisse essayer cet accord, car j'ai seulement essay&eacute; dans Logic Pro et certains intervalles ne sonnaient pas bien, et je ne comprends pas pourquoi..... Vous pouvez me contacter &agrave; l'adresse suivante : brock@brockmytton.com</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/72c9cede4525001a7c691ffd858a2626.png\" /></p>\r\n<p>RL= points sur la s&eacute;quence de doublement de la fonction rectiligne y=.0833333*x</p>\r\n<p>PO-P12= rapports pythagoriciens</p>\r\n<p>O = point (11,3265, 1) o&ugrave; 12&radic;2/12*x (12TET) effectue ses premiers cycles.</p>",
        "topics": [
            {
                "id": 283,
                "name": "Theory",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 191,
                "name": "Tuning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17661,
            "forum_user": {
                "id": 17657,
                "user": 17661,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7356ec9886128a3b915cfe90fc832be6?s=120&d=retro",
                "biography": null,
                "date_modified": "2023-11-18T10:39:32.702791+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "flartec",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "new-tuning-theorypractice-revisited-from-1812020",
        "pk": 2641,
        "published": false,
        "publish_date": "2023-11-15T17:47:59+01:00"
    },
    {
        "title": "Pioneering AI Music, Binaural Beats, and Biofeedback for Health and Creativity by Chih-Fang Huang (Taiwan)",
        "description": "MuseTech Inc. is reshaping the future of sound by merging AI-driven music generation, Binaural Beats (BB), and biofeedback technology. Founded by Prof. Chih-Fang (Jeff) Huang, the company develops systems where music is not only composed by artificial intelligence but also adapted in real time to human physiology through HRV monitoring and bone-conduction headsets. This integration allows music to serve as both an artistic creation and a therapeutic tool—enhancing focus, relaxation, and emotional balance. By combining innovation in electroacoustic art, health technology, and immersive performance, MuseTech pioneers a new paradigm where music becomes personalized medicine for the mind and body.",
        "content": "<p></p>\r\n<h1>Pioneering AI Music, Binaural Beats, and Biofeedback for Health and Creativity</h1>\r\n<p><strong>By Prof. Chih-Fang (Jeff) Huang</strong></p>\r\n<hr />\r\n<h2>A New Frontier in Music and Technology</h2>\r\n<p>Music has always been more than entertainment: it is a language of emotion, memory, and healing. In the 21st century, with the rise of artificial intelligence and neurotechnology, music is entering a new era where it can be <strong>generated, adapted, and personalized in real time</strong>. <em>MuseTech Inc.</em>, founded in Taiwan by composer and researcher Prof. Chih-Fang (Jeff) Huang, is at the forefront of this transformation.</p>\r\n<hr />\r\n<h2>AI Music Generation</h2>\r\n<p>MuseTech&rsquo;s platform allows users to <strong>hum a melody, choose a style or mood, and instantly receive a full musical composition</strong> generated by AI. Unlike traditional music software, this system combines <strong>machine learning with human input</strong>, producing orchestral scores, electroacoustic soundscapes, or pop-inspired arrangements that remain musically coherent and emotionally expressive. For educators, students, and professionals alike, the platform turns AI into a <strong>creative partner</strong> rather than just a tool.</p>\r\n<hr />\r\n<h2>Binaural Beats and Biofeedback</h2>\r\n<p>Where MuseTech truly innovates is in <strong>music for health and therapy</strong>. Its research integrates <strong>Binaural Beats (BB)</strong>&mdash;a psychoacoustic technique using slightly different frequencies in each ear to influence brainwave states&mdash;with <strong>Heart Rate Variability (HRV) monitoring</strong>. Through specially designed <strong>bone-conduction headsets</strong>, the system reads physiological signals and adjusts BB rhythms, textures, and harmonic layers in real time. 
The result is music that not only responds to the listener&rsquo;s emotions but also <strong>guides the body toward relaxation, focus, or restorative balance</strong>.</p>\r\n<hr />\r\n<h2>Immersive Concerts and Cross-Industry Partnerships</h2>\r\n<p>Beyond personal wellness, MuseTech explores large-scale artistic applications. Its <strong>immersive concerts</strong> combine live orchestra, electroacoustic music, BB sound layers, and synchronized light to create transformative experiences. In parallel, MuseTech collaborates with hospitals such as <strong>Taoyuan Veterans Hospital</strong> for clinical validation, and with industry leaders including <strong>AUO (VR displays), Quanta, and Compal</strong> for hardware development. Potential applications even extend to <strong>aerospace and defense</strong>, where adaptive sound systems could help pilots and operators maintain focus under stress.</p>\r\n<hr />\r\n<h2>Music as Medicine</h2>\r\n<p>MuseTech&rsquo;s vision is bold yet clear: to redefine music as a <strong>personalized medicine for the mind and body</strong>. By fusing <strong>AI creativity, BB-induced neuro-modulation, and biofeedback-driven interaction</strong>, the company stands at the convergence of art, science, and healthcare. For Prof. Huang, this is not just a technological achievement but a cultural mission:</p>\r\n<blockquote>\r\n<p>&ldquo;Music has always shaped human life. With AI and biofeedback, we now have the ability to tailor music as a therapeutic agent&mdash;one that heals, inspires, and connects.&rdquo;</p>\r\n</blockquote>",
        "topics": [
            {
                "id": 3462,
                "name": "AI & Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3479,
                "name": "Binaural Beats (BB)",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 565,
                "name": "Biofeedback",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3481,
                "name": "Bone-Conduction Headset",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3480,
                "name": "Heart Rate Variability (HRV)",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3482,
                "name": "Music and Sound Therapy",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3484,
                "name": "Music as Medicine",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3483,
                "name": "Neurotechnology",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 8927,
            "forum_user": {
                "id": 8924,
                "user": 8927,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/8bf07eef77bf44a979f87c65bdb06fb3?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-09T11:45:08.383687+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jeffh",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "pioneering-ai-music-binaural-beats-and-biofeedback-for-health-and-creativity",
        "pk": 3753,
        "published": true,
        "publish_date": "2025-10-03T10:43:49+02:00"
    },
    {
        "title": "Intuition and Rationality: Spatial Composition with Voices, Instruments and Live Electronics after Baruch Spinoza by Daniel Peter Biro",
        "description": "From 2021-2025, I completed a series of compositions at the SWR Experimentalstudio as part of the artistic research project Sounding Philosophy, supported by the Norwegian Artistic Research Program. These compositions incorporate a new way of presenting timbre in space. In my talk, I will describe this ongoing research and how it relates to larger questions of spatial perception, musical meaning and its connection to the philosophy of Baruch Spinoza (1632-1677).",
        "content": "<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\n<p>From 2021-2025, I worked on the compositions<em>&nbsp;</em>De Natura et Origine, written for Kai Wessel, the Ensemble Mixtura and the SWR Experimentalstudio, and&nbsp;<em>Asher Hotseti Etkhem (Who Brought You Out Of The Land),</em>&nbsp;written for the Neue Vocalsolisten and the SWR Experimentalstudio. These compositions, based on texts from the&nbsp;<em>Ethics&nbsp;</em>of Baruch Spinoza, incorporate a new way of presenting vocal music in space, incorporating research into the combination of pitch-tracking, timbre-tracking and spatialization with convolution in a live-electronic setup.</p>\r\n<p>A description of this research, done in conjunction with team members of the SWR Experimentalstudio, and resulting compositions can be seen here:</p>\r\n<p><a href=\"https://www.youtube.com/watch?v=QnxWMFwDF88\">https://www.youtube.com/watch?v=QnxWMFwDF88</a></p>\r\n<p><a href=\"https://www.youtube.com/watch?v=coeC31Xas-A\">https://www.youtube.com/watch?v=coeC31Xas-A</a></p>\r\n<p><a href=\"https://youtu.be/y-dqSzDs--s\" title=\"De Natura et Origine\">https://youtu.be/y-dqSzDs--s</a></p>\r\n<p>In my talk, I will describe this ongoing research and how it connects to larger questions of spatial perception and musical meaning, the compositions presenting musical analogies to Spinoza&rsquo;s theories of cognition, intuition and rationality. I will also show how the compositions, which integrate elements of Jewish, Muslim and Christian recitation practices, have been informed by computational ethnomusicology frameworks, done in conjunction with Peter van Kranenburg at Utrecht University. 
In addition, I will give a glimpse into <em>Katuv Basefer (Inscribed in the Book)</em>, a&nbsp;new composition written for Les M&eacute;taboles and the SWR Experimentalstudio, supported by the Ernst von Siemens Music Foundation.</p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 3225,
                "name": "Baruch Spinoza",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3224,
                "name": "Dániel Péter Biró",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3229,
                "name": "Ensemble Mixtura",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3228,
                "name": "Kai Wessel",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3227,
                "name": "Neue Vocalsolisten",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3230,
                "name": "Norwegian Artistic Research Program",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3226,
                "name": "SWR Experimentalstudio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 199,
            "forum_user": {
                "id": 199,
                "user": 199,
                "first_name": "Dániel Péter",
                "last_name": "Biró",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/9a49da253cecceb7f4ee5ca36c2ae1a6?s=120&d=retro",
                "biography": "Dániel Péter Biró is Professor for Composition at the Grieg Academy in Bergen, Norway. He studied in Hungary, Germany, Austria, Switzerland and Israel received his Ph.D. from Princeton University in 2004. From 2004 -2009 he was Assistant Professor and from 2009-2018 Associate Professor for Composition and Music Theory at the University of Victoria in Victoria, BC, Canada. In 2010 he received the Gigahertz Production Prize from the ZKM-Center for Art and Media. In 2011 he was Visiting Professor at Utrecht University and in 2014-2015 Research Fellow at the Radcliffe Institute for Advanced Study, Harvard University. In 2015 he was elected to the College of New Scholars, Scientists and Artists of the Royal Society of Canada. In 2017 he was awarded a Guggenheim Fellowship. Dániel Péter Biró has been commissioned by prominent musicians, ensembles and festivals and his compositions are performed around the world. He is currently leading the project Sounding Philosophy (2021-2025), supported by the Norwegian Artistic Research Program.",
                "date_modified": "2026-03-01T15:46:30.208623+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 835,
                        "forum_user": 199,
                        "date_start": "2012-11-22",
                        "date_end": "2025-05-11",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "DanielPeterBIRO",
            "first_name": "Dániel Péter",
            "last_name": "Biró",
            "bookmarks": []
        },
        "slug": "intuition-and-rationality-spatial-composition-with-voices-instruments-and-live-electronics-after-baruch-spinoza",
        "pk": 3592,
        "published": true,
        "publish_date": "2025-07-31T13:49:38+02:00"
    },
    {
        "title": "AI Blog",
        "description": "Explore AI tools for automated blogging and content writing in one directory.",
        "content": "<p>The All AI Blog Generator Directory is your go-to guide for discovering AI writing tools. It features a curated selection of platforms designed for blog and text generation. Users can compare tools that range from fully automated systems to writing assistants. The directory simplifies the process of choosing the right AI solution. It is perfect for those looking to enhance website content. You can quickly understand features, benefits, and differences between tools. The platform saves time by organizing everything in one place. It is suitable for beginners and experienced creators alike. Whether you want automation or support, you&rsquo;ll find useful options here. For questions or feedback, contact @johnrushx on Twitter.</p>\n<p>Website: <a href=\"https://aibloggenerators.com/\">https://aibloggenerators.com/</a></p>\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4524,
                "name": "blog",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4525,
                "name": "writing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166240,
            "forum_user": {
                "id": 166004,
                "user": 166240,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/4142bcbf9e9642b332c6a70eaddb1690?s=120&d=retro",
                "biography": "The All AI Blog Generator Directory helps you navigate the growing world of AI writing tools with ease. It gathers a wide range of blog and text generators in one place. You can explore solutions that fully automate content creation or assist in drafting ideas. The platform provides comparisons to help you choose the most suitable option. It is especially useful for improving website content efficiently. Users can learn about different tools without spending hours researching. The directory is structured to be beginner-friendly and informative. It supports both casual users and professionals. Whether you're optimizing content or experimenting with AI, it’s a valuable resource. Reach out on Twitter: @johnrushx.",
                "date_modified": "2026-03-31T09:46:53.002657+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "aiblog",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ai-blog",
        "pk": 4560,
        "published": false,
        "publish_date": "2026-03-31T09:48:47.129263+02:00"
    },
    {
        "title": "Moving Sound Pictures : Hommage, an immersive VR experience by Konstantina Orlandatou",
        "description": "Moving Sound Pictures is a project in which users can interactively explore paintings by famous and contemporary visual artists through playful actions\r\nusing VR technology. \r\n“Hommage” is an interactive VR installation in which\r\nthree artworks have been adapted to a virtual three-dimensional environment. Explore Dali’s living room inspired by Mae West’s face, interact with Matisse’s\r\nspray of leaves, and play with Picasso’s mandolin and guitar. \r\n“Hommage” is a tribute to friendship, common respect, and admiration between artists, who are milestones in art history of the 20th century.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<h1>Moving Sound Pictures:<span>&nbsp;</span><em>Hommage</em></h1>\r\n<h1>Content creation for art mediation through &nbsp;</h1>\r\n<h1>VR technologies</h1>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-align: center;\"><img src=\"https://forum.ircam.fr/media/uploads/hommage_gallery2.png\" alt=\"\" width=\"784\" height=\"438\" /><span>&nbsp;&nbsp;</span><img src=\"https://forum.ircam.fr/media/uploads/hommage_gallery.png\" alt=\"\" width=\"788\" height=\"440\" /></p>\r\n<p><em></em></p>\r\n<p><em>Presented by : Konstantina Orlandatou</em></p>\r\n<p><a href=\"https://forum.ircam.fr/profile/dinaorla/\" target=\"_blank\">Biography</a>&nbsp;</p>\r\n<p></p>\r\n<p><em>Abstract</em></p>\r\n<p><em><br />Hommage is an interactive VR installation in which three artworks have been adapted to a virtual three-dimensional environment. Explore Dali&rsquo;s living room inspired by Mae West&rsquo;s face, interact with Matisse&rsquo;s spray of leaves, and play with Picasso&rsquo;s mandolin and guitar. &ldquo;Hommage&rdquo; is a tribute to friendship, common respect, and admiration between artists, who are milestones in art history of the 20th century. Users are invited to explore artworks in a 3D space by interacting with the objects of these artworks. Through interaction music emerges and the artworks become musical instruments for the user. 
The installation is part of the Moving Sound Pictures project, whose mission is the use of VR technologies for art mediation and the transfer of knowledge.</em></p>\r\n<h2>INTRODUCTION</h2>\r\n<p>In recent years the development of software and hardware in the area of XR technologies has blossomed rapidly, opening up incredible possibilities in different fields. VR in particular has become a permanent feature of immersive learning experiences, providing, in many fields, the possibility of recreating real-life settings and simulations of work challenges. This has been particularly beneficial in areas that deal with purpose-created scenarios and crisis management, hands-on experiences, skills analysis and decision making, and remote training. Why not, then, use VR technologies for art mediation and education?<br />From the perspective of a classical composer, time is an inevitable element of music. In visual arts, time doesn&rsquo;t exist. The painter doesn&rsquo;t tell a story in a strict timeline but rather catches a glimpse of a moment in a frame. What would it be like if paintings came alive and stepped out of their two-dimensional space? What if Kandinsky could make music with his circles, lines and triangles? What if Mondrian could come out of the canvas&rsquo; frame?<br />All these questions are dealt with in the project<span>&nbsp;</span><em><a href=\"https://ligetizentrum.hfmt-hamburg.de/index.php/movingsoundpictures/\">Moving Sound Pictures</a><span>&nbsp;</span></em>[1], a project in which users can interactively explore paintings by famous and contemporary visual artists through playful actions using VR technologies. Paintings are transformed into three-dimensional spaces in which the user has the opportunity to touch, move or enlarge the objects in the painting and to create music through these actions. Additionally, the user gets information about the artwork and its history. 
While the user interacts with the VR environment, the transfer of knowledge takes place on this virtual stage at the same time. Furthermore, the user has the opportunity to discover the artwork from another perspective in a unique immersive experience. These imaginary worlds are based on how I see the artworks; they add my artistic interpretation, creating a connection between visual arts and music through (visual) story-telling.&nbsp;<br />In a previous VR environment, entitled<span>&nbsp;</span><em>The Abstract Painters<span>&nbsp;</span></em>[2], four paintings of the Abstract Era between 1916 - 1924 by Kasimir Malevich, El Lissitzky, Piet Mondrian and Wassily Kandinsky were transferred into a VR environment and dressed up musically. The user enters a gallery designed in the style of the 1920s, with wine-red wallpaper and a shiny brown wooden floor. The paintings hang on the walls as in an exhibition and act as gateways into the walk-in virtual rooms: in the VR version of Malevich&rsquo;s painting<span>&nbsp;</span><em>Suprematism</em><span>&nbsp;</span>you can make the objects sound with two mallets; in the VR version of Kandinsky&rsquo;s<span>&nbsp;</span><em>Merry Structure</em><span>&nbsp;</span>you can touch the objects with your hands and thus create music. 
In the main room of this virtual gallery the user can get &ndash; through voice-over in English, German or Chinese &ndash; information about the painting and the painter.<br />Following the same paradigm, the interactive VR environment<span>&nbsp;</span><em>Hommage<span>&nbsp;</span></em>hosts three artworks by painters who are considered milestones of the 20<sup>th</sup><span>&nbsp;</span>century, specifically:<span>&nbsp;</span><em>The Face of Mae West</em><span>&nbsp;</span>(1934&ndash;35) by Salvador Dali,<span>&nbsp;</span><em>Spray of Leaves</em><span>&nbsp;</span>(1953) by Henri Matisse and<span>&nbsp;</span><em>Mandolin and Guitar</em><span>&nbsp;</span>(1924) by Pablo Picasso.<span>&nbsp;</span><em>Hommage</em><span>&nbsp;</span>is literally a tribute to the friendship, mutual respect and admiration between these artists.</p>\r\n<h2>Content Creation &amp; Development</h2>\r\n<ul>\r\n<li>\r\n<h3>Creative content</h3>\r\n</li>\r\n</ul>\r\n<p>Reading art history books, one notices that over the years many relationships (friendships or rivalries) have formed between famous artists. Salvador Dali was fascinated by the actress and sex icon Mae West. Henri Matisse at first felt threatened by the younger painter Pablo Picasso, but as time passed their creative rivalry developed into a respectful artistic relationship. What would it be like if their paintings were combined in one interactive space?<br />A large three-dimensional space (approximately 100 m<sup>2</sup>) with interesting architecture serves as an apartment. The starting point is a corridor where photos of the paintings and their titles hang on the walls. From there the user has a view of the living room, which is a 3D reproduction of Dali&rsquo;s<span>&nbsp;</span><em>The Face of Mae West</em>. The corridor leads to the main large room where the artworks &ndash; now transformed into three-dimensional reconstructions &ndash; are placed as parts of an apartment. 
However, these parts are not separated from each other by walls; rather, each artwork overlaps with the next.<br />There are three (3) different areas, one for each artwork, where different interactions can happen. Dali&rsquo;s<span>&nbsp;</span><em>The Face of Mae West<span>&nbsp;</span></em>is set in the middle of the room, representing a living room, where the user can &ldquo;touch&rdquo; the sofa or play percussion with the clock positioned on top of a piece of furniture that resembles a nose. Matisse&rsquo;s oversized leaves have been positioned on the right side of the room. Here the colored leaves have been placed on the floor, and as one walks through them the leaves start twisting and swinging, producing different alternating sounds. On the left side of the room, one can pick up and play Picasso&rsquo;s dissonant<span>&nbsp;</span><em>Mandolin and Guitar</em>.<br />The music and the sound design have been inspired by the colors and the sizes of the objects, drawing on knowledge gained from research into cross-modal correspondences and sound-color synaesthesia. Among other mappings, brighter colors are connected to darker sounds and big objects are assigned low-pitched sounds.</p>\r\n<ul>\r\n<li>\r\n<h3>Technical development</h3>\r\n</li>\r\n</ul>\r\n<p>When developing an interactive VR installation there are many aspects to consider. Since the project serves the transfer of knowledge for a wide audience (ideally in a museum, where the analog artwork meets the digital one, accessible to audiences of any age), the environments have to be designed so that they are appropriate for everyone from small children to elderly people and people with disabilities &ndash; in short, all visitors of a museum.<br />Through storytelling and visual cues,<span>&nbsp;</span><em>Hommage</em><span>&nbsp;</span>has been designed to be intuitive. 
Grabbable objects are placed at a middle height so that they are reachable by small children, and diverse types of interaction are possible by pressing a single button or trigger of a controller at a time.<br />For the installation, visuals and graphics play a very important role in the perception of the experience. Therefore the whole virtual environment &ndash; graphics, lighting and interactions &ndash; has been developed using Unreal Engine. The raw material for the music and the sound design has been composed and produced outside the engine (using recordings, Cubase, Max/MSP, etc.); the sound implementation, however, is developed using FMOD and the FMOD plug-in for the engine.<br />The environment/installation is walkable and can be explored through a VR headset. It is developed as a PCVR version and is suitable for both Meta Quest 3 and HTC Vive Pro 2/Focus headsets.</p>\r\n<h2>CONCLUSION</h2>\r\n<p><em>Hommage</em><span>&nbsp;</span>is a VR installation dedicated to visual artists who have contributed to art in an exceptional and unique way. With the use of VR technology, artworks are presented in an entirely different way, combining music and interaction to mediate art from another perspective. Innovative VR content creation and technology can lead to a unique, individual immersive arts experience for museums and institutions.</p>",
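The cross-modal mappings described above (for example, big objects assigned to low-pitched sounds) amount to simple parameter mappings from visual properties to sound parameters. As a minimal sketch of the idea: the function name, the size range and the MIDI pitch range below are hypothetical illustration values, not taken from the project's actual implementation.

```python
def size_to_midi_pitch(size_m, size_range=(0.1, 3.0), pitch_range=(96, 36)):
    """Map an object's size (metres) to a MIDI pitch: bigger objects -> lower pitches.

    size_range and pitch_range are made-up illustration values,
    not the project's real mapping.
    """
    lo, hi = size_range
    p_small, p_big = pitch_range
    size_m = min(max(size_m, lo), hi)      # clamp to the mapped size range
    t = (size_m - lo) / (hi - lo)          # 0.0 (smallest) .. 1.0 (biggest)
    return round(p_small + t * (p_big - p_small))
```

For instance, a 0.1 m object maps to MIDI pitch 96 (high) and a 3 m object to 36 (low); sizes outside the range are clamped. A color-brightness axis could be mapped to a timbre parameter in the same way.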
        "topics": [
            {
                "id": 2542,
                "name": "art mediation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 115,
                "name": "Music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2545,
                "name": "sound design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2546,
                "name": "visual arts",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1108,
                "name": "VR",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2544,
                "name": "xr tech",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 38623,
            "forum_user": {
                "id": 38572,
                "user": 38623,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/VRFoto.jpg",
                "avatar_url": "/media/cache/56/42/564274169fd95977b626b13220767528.jpg",
                "biography": "Konstantina Orlandatou studied composition, music theory, piano and accordion at the\nConservatory of Athens (Greece) and multimedia composition (M.A.) at the University of Music\nand Drama in Hamburg. In 2014 she completed her doctoral dissertation, entitled\n“Synaesthetic and intermodal audio-visual perception: an experimental research”, at the\nUniversity of Hamburg (Department of Systematic Musicology).\nSince 2018 she has been leading the project “Moving Sound Pictures”, in which she uses VR technologies as an interface for art mediation between music and the visual arts, and since 2023 she has been head of the XR Laboratory at the ligeti center.",
                "date_modified": "2025-03-31T09:32:36.680717+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1058,
                        "forum_user": 38572,
                        "date_start": "2025-01-20",
                        "date_end": "2026-01-20",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "dinaorla",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "moving-sound-pictures-hommage-an-immersive-vr-experience",
        "pk": 3250,
        "published": true,
        "publish_date": "2025-02-20T16:59:06+01:00"
    },
    {
        "title": "Spatial Audio Pedagogies: Designing Across Studio, Culture, and Space by Rodrigo Meirelles & Daniel Ocanto",
        "description": "This demo-based presentation explores spatial audio as both a creative practice and a pedagogical method. Drawing from faculty-led and student-produced projects developed at Arizona State University’s Media and Immersive eXperience (MIX) Center, the session demonstrates how immersive sound workflows evolve from studio-based design to large-scale installations and live performance.\r\n\r\nThrough listening examples, live spatial manipulation, and student reflections, the presentation frames spatial audio as a perceptual storytelling tool that bridges technology, cultural inquiry, and interdisciplinary collaboration.",
        "content": "<div><strong><strong>➡️ This presentation is part of&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></div>\r\n<div><strong><strong></strong></strong></div>\r\n<div><span>Spatial audio is often approached as a technical specialization. This presentation instead proposes it as a pedagogical framework: a way of thinking, listening, and designing across studio practices, cultural narratives, and physical space.</span></div>\r\n<div>&nbsp;</div>\r\n<div><span>Developed over a 1.5-year period at Arizona State University&rsquo;s Media and Immersive eXperience (MIX) Center, this project brings together faculty research and student work created for the Enhanced Immersion Studio (EIS), a large-scale 55-speaker Meyer Sound Spacemap Go NADIA&nbsp;environment designed for immersive media, performance, and experimentation. The session shares how spatial audio is taught not as a fixed workflow, but as a transferable design literacy that adapts to different formats, technologies, and creative contexts.</span></div>\r\n<div>&nbsp;</div>\r\n<div><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/bb4efa6c183b55936888a67dd49a4b3f.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></div>\r\n<div>&nbsp;</div>\r\n<div><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b47542252bbde58f461756670e79dadd.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></div>\r\n<div>&nbsp;</div>\r\n<div><span>The presentation features excerpts from faculty-led works and student projects spanning sound installations, live performance, and cross-disciplinary collaborations involving voice, video, text, and show control systems. 
Rather than emphasizing technical polish, these case studies foreground process: how students from diverse backgrounds learn to think spatially through collaborative, hands-on experimentation.</span></div>\r\n<div>&nbsp;</div>\r\n<div>\r\n<div><span>Inspired by critical and experiential pedagogical models, this approach treats students as active authors of meaning rather than passive users of tools. Spatial decisions become narrative, cultural, and perceptual choices: how sound moves, where it resides, and how it relates to memory, environment, and storytelling.</span></div>\r\n<div>&nbsp;</div>\r\n<div><span>The session combines focused listening, system walkthroughs, live spatial manipulation, and short student testimonies reflecting on their creative process. By comparing different spatial contexts&mdash;large-scale installations, studio playback, and binaural renderings&mdash;the presentation highlights how spatial thinking persists across technical constraints.</span></div>\r\n</div>\r\n<div><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/6f8ed0065e77e5c225407a657d81ca31.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></div>\r\n<div><span>Ultimately, this demo argues for spatial audio as a shared language across disciplines, one that supports creative autonomy, collective authorship, and new forms of immersive storytelling beyond the laboratory or the classroom.</span></div>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 37,
                "name": "Pedagogy",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 85355,
            "forum_user": {
                "id": 85254,
                "user": 85355,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/b67e1dc400be5e20a0fd1ef1ec598178?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-19T22:41:56.250254+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "daosounds",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "spatial-audio-pedagogies-designing-across-studio-culture-and-space-by-rodrigo-meirelles-daniel-ocanto",
        "pk": 4390,
        "published": true,
        "publish_date": "2026-02-18T23:04:20+01:00"
    },
    {
        "title": "Cycling 74 - Emmanuel Jourdan, David Zicarelli",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>This presentation will be an opportunity to show some of the work in progress.</p>",
        "topics": [],
        "user": {
            "pk": 67,
            "forum_user": {
                "id": 67,
                "user": 67,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7756331c6a0abd9247953ec7b06b8527?s=120&d=retro",
                "biography": "Emmanuel has been a consultant for Cycling ’74 since 2006, where he collaborates on the development of Max, MSP and Max for Live.",
                "date_modified": "2026-01-21T19:05:38.183321+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "emmanueljourdan",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "cycling-74-emmanuel-jourdan-david-zicarelli",
        "pk": 2134,
        "published": true,
        "publish_date": "2023-03-14T12:31:25+01:00"
    },
    {
        "title": "GranShaper synth. New strategies of granular synthesis and derivative types of sound synthesis: granular ‘vocoder’, ‘granular waveshaping’ and ‘granular shape-morphing’ by Nikolai Khrust",
        "description": "GranShaper project introduces a new method of sound synthesis, combining the principles of granular synthesis and waveshaping, together creating a range of synergistic effects, likely previously unknown. In particular, these include a ‘granular vocoder’ and ‘waveshape-morphing’, which means a smooth timbral transition between two sounds through waveshaping.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p>&nbsp;</p>\r\n<p>Despite the fact that a lot has already been said about granular synthesis, and&nbsp;the&nbsp;number of granular synthesisers may be counted in the tens&sup1;, in our opinion the&nbsp;potential of&nbsp;this type of synthesis is far from exhausted. By combining it with waveshaping in a certain design, one can obtain new sound results that could not arise from using just one of these kinds of sound synthesis.</p>\r\n<p>&nbsp;</p>\r\n<h1><strong>1. Granular synthesis or &lsquo;granulation&rsquo;?</strong></h1>\r\n<p>The idea of grains as small parts of sound signals was first proposed by Dennis Gabor&sup2; (G&aacute;bor D&eacute;nes), while the&nbsp;full theory and&nbsp;practice of&nbsp;<strong>granular synthesis</strong>&nbsp;was developed by&nbsp;Iannis Xenakis&sup3;. Later, Curtis Roads implemented its computer realisation⁴; Barry Truax further advanced the method, including its real-time realisation⁵.</p>\r\n<p>Xenakis claimed in his <em>lemma</em>: &lsquo;All sound is an integration of grains, of elementary sonic particles, of sonic quanta. Each of these elementary grains has a threefold nature: <em>duration</em>, <em>frequency</em>, and&nbsp;<em>intensity</em>&rsquo;⁶. <strong>Today&rsquo;s granular synthesis</strong> typically processes already existing sound, taking a&nbsp;small part of it as&nbsp;a&nbsp;grain and repeating it very fast while flexibly varying the&nbsp;characteristics of&nbsp;the&nbsp;repetition. 
To reduce grain density one can insert gaps between adjacent grains (see Section 7 for details); one solution for making grains overlap is to use many &lsquo;voices&rsquo; of grain repetition.&nbsp;</p>\r\n<p>So where is the border between synthesis and&nbsp;&lsquo;granulation&rsquo; or just &lsquo;processing&rsquo;? At which point does the <strong>granular method really synthesise a new sound</strong> instead of just montaging the existing one? In our view, synthesis starts when we no&nbsp;longer hear the&nbsp;source sound, but a new sound with new parameters is generated (which means that, for us, most commercial granulators are not synthesisers but rather &lsquo;texturisers&rsquo;).&nbsp;</p>\r\n<p>Since we&rsquo;re using a pre-recorded sound with an already existing frequency and&nbsp;spectrum (and setting aside issues of intensity for a while), we need to rework our approach to the three parameters of Xenakis&rsquo;s lemma. Our <strong>GranShaper</strong> project takes the following three main parameters for granular synthesis:</p>\r\n<ul>\r\n<li>frequency of repetition of the grain [𝑓, Hz];</li>\r\n<li>playing rate&nbsp;&mdash; is the grain played back at the original 1:1 rate or faster/slower? [𝑟, times];</li>\r\n<li>grain time position (time offset) in the source sound [𝜏₀, seconds].</li>\r\n</ul>\r\n<p>One cannot control the source sound&rsquo;s grain length (or duration: 𝑙), as it depends entirely on, and is calculated from, the first two parameters:&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">𝑙&ensp;=&ensp;𝑟 : 𝑓 &nbsp; &nbsp; &nbsp;&nbsp; (1)</p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p>At lower repetition frequencies we perceive the repetition frequency as the tempo of&nbsp;an ostinato; we hear the pitch of the original sound, and the playing rate transposes it. We recognise the source sound. 
But once the frequency rises above 20 Hz we no longer hear the&nbsp;source; the repetition frequency becomes the new pitch, and the original pitch, altered by the playing rate, becomes a formant of the new sound (see Fig.&nbsp;1). This is the moment when <strong>real synthesis starts</strong>.</p>\r\n<p></p>\r\n<p>&nbsp;<img alt=\"Fig. 1. Changing perceived sound parameters depending on repetition frequency\" src=\"https://forum.ircam.fr/media/uploads/user/b7f13922d7bb4d813d4aad02864184ae.png\" width=\"1422\" height=\"1064\" /></p>\r\n<p style=\"text-align: center;\">Fig. 1. Changing perceived sound parameters depending on repetition frequency</p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<p>Probably the first brilliant example of this qualitative change when crossing the&nbsp;20 Hz border is found in <em>Kontakte</em> (1958&mdash;1960) by Karlheinz Stockhausen.</p>\r\n<p></p>\r\n<p><img alt=\"Fig. 2. Karlheinz Stockhausen. Kontakte. Section X. L.: Universal Edition, 1966. P. 19&ndash;20\" src=\"https://forum.ircam.fr/media/uploads/user/a94ed4719d89e079e24352b1f76cabf5.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"1401\" height=\"359\" /></p>\r\n<p style=\"text-align: center;\">Fig. 2. Karlheinz Stockhausen. Kontakte. Section X. L.: Universal Edition, 1966. P. 19&ndash;20</p>\r\n<p>&nbsp;</p>\r\n<p>Usually this work is not counted as an example of granular synthesis, because it uses neither pre-recorded sound nor chaotisation, which are commonly considered characteristic of granular synthesis. But Stockhausen actually used a granular technique here: a very short, previously created sound is repeated very quickly, and its repetition frequency forms the new pitch. It makes a vast downward glissando, and&nbsp;the&nbsp;first vertical dashed line in the score (see Fig. 
2) shows the point where this slide <strong>crosses the 20 Hz border</strong>, moving us from <em>sound synthesis</em> to <em>rhythmisation</em>, or from the <em>pitch area</em> to&nbsp;the&nbsp;<em>rhythm area</em>, in Stockhausen&rsquo;s terms⁷. After crossing this threshold we realise that this low sound with&nbsp;its rich spectrum is actually a&nbsp;sequence of&nbsp;very short, high-pitched sine fragments.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h1><strong>2. Granular &lsquo;vocoder&rsquo;</strong></h1>\r\n<p>Above we touched on the role of parameters No. 1 (repetition frequency) and No. 2 (playing rate), but said nothing about the third parameter, the time position of the grain. Perhaps the most interesting feature of granular synthesis is the dynamic aspect of its parameters: diverse strategies of continuously changing the values result in a vast variety of complex sounds, as was first demonstrated in the first &lsquo;official&rsquo; precedent of&nbsp;granular synthesis, <em>Analogique B</em> (1958&mdash;1959) by Iannis Xenakis⁸. When all parameters are sufficiently stable (the values change very slowly) and 𝑓 &gt; 20 Hz, the resulting signal is close to periodic and the sound is close to a tone. In GranShaper one can play melodies and&nbsp;chords with such tones (particularly using a MIDI keyboard). But when the values change dramatically, the&nbsp;sound is no longer periodic, and we obtain different kinds of noises and transitional sounds.&nbsp;</p>\r\n<p>In particular, while the time position (𝜏₀) glides continuously, we can hear different parts of the original sound in a granularly transformed way. When 𝜏₀ increases from&nbsp;the beginning to the end at a speed close to the normal playback speed, the&nbsp;obtained sound most closely resembles the source sound, because the process of playing it largely resembles normal playback. 
The difference is that instead of just playing it from the&nbsp;beginning to the end (as denoted in Fig.&nbsp;3)&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\"><img alt=\"Fig. 3. Simple playback (time axis)\" src=\"https://forum.ircam.fr/media/uploads/user/444c468ce4132e42924f73080bbf77cd.png\" width=\"1611\" height=\"153\" /></p>\r\n<p style=\"text-align: center;\">Fig. 3. Simple playback (time axis)</p>\r\n<p>&nbsp;</p>\r\n<p>we play it in a zigzag fashion, in small pieces, and each subsequent piece (grain) starts from a later time point 𝜏₀ (Fig.&nbsp;4).</p>\r\n<p></p>\r\n<p>&nbsp;<img alt=\"Fig. 4. Granular &lsquo;playback&rsquo; with continuously increasing grain starting position 𝜏₀\" src=\"https://forum.ircam.fr/media/uploads/user/ba8933472bee3c5c7e58289b84193e4e.png\" width=\"1445\" height=\"1147\" /></p>\r\n<p style=\"text-align: center;\">Fig. 4. Granular &lsquo;playback&rsquo; with continuously increasing grain starting position 𝜏₀</p>\r\n<p>&nbsp;</p>\r\n<p>The resulting sound is still a tone, as it has a structure very close to periodic: every grain is one period, and since neighbouring grains are very similar (they start from not the same but very similar time points in the source sound), their sequence forms a quasiperiodic signal. On&nbsp;the other hand, the result strongly resembles the source sound, as we &lsquo;scan&rsquo; it from the beginning to the end, much like simple playback.&nbsp;</p>\r\n<p>So the sound we obtain is ambivalent and possesses qualities of both the source sound and&nbsp;the new tone. We call this effect the <strong>granular &lsquo;vocoder&rsquo;</strong>. Indeed, it is not a real vocoder, as&nbsp;it doesn&rsquo;t use any frequency-domain technique. It sounds different, but it is sometimes reminiscent of vocoder processing. 
One can even use it in a typical vocoder situation: load a speech sample as the source sound, play a chord, and slide the &lsquo;starting position&rsquo; (𝜏₀) slider slowly from the beginning to the end. The chord will start &lsquo;speaking&rsquo;.</p>\r\n<p>An interesting feature of this technique is that we can modulate from the tone (or&nbsp;chord) to the original sound by altering the speed at which 𝜏₀ changes: when this speed is exactly equal to the normal playback speed, the end of the previous grain is exactly the same moment (in the source sound) as the beginning of the next one. So, ideally, it &lsquo;degenerates&rsquo; into just normal playback (Fig.&nbsp;5).</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\"><img alt=\"Fig. 5. Granular playback &lsquo;degenerates&rsquo; into normal playback when the speed of 𝜏₀ is exactly equal to the normal playback speed.\" src=\"https://forum.ircam.fr/media/uploads/user/ac75d34fff8de0f75b46d4d2805491c1.png\" width=\"1417\" height=\"137\" /></p>\r\n<p style=\"text-align: center;\">Fig. 5. Granular playback &lsquo;degenerates&rsquo; into normal playback when the speed of 𝜏₀ is exactly equal to&nbsp;the normal playback speed.</p>\r\n<p>&nbsp;</p>\r\n<p>That said, we should mention that this is true only when the playing rate&nbsp;(𝑟) is the normal 1:1. Then we can generalise our statement: when the speed of the 𝜏₀ change is exactly equal to&nbsp;𝑟, the end of the previous grain is exactly the same moment (in&nbsp;the&nbsp;source sound) as the beginning of the next one. So, ideally, it &lsquo;degenerates&rsquo; into&nbsp;just normal playback at rate 𝑟 (when 𝑟 is not 1, the source sound is transposed and&nbsp;stretched/shrunk).</p>\r\n<p>However, when 𝜏₀ changes very slowly, we hear the tone (or chord) but not the source sound. When the speed of the 𝜏₀ change is less than 𝑟 but&nbsp;does not approach 0, we hear the &lsquo;vocoder&rsquo; effect, because the tone is &lsquo;coloured&rsquo; by&nbsp;the source sound. 
And when the speed of the 𝜏₀ change is much greater than 𝑟, or well below 0 (so we move backwards), we hear a quite different complex sound. So, by modulating that speed from 0 to 𝑟 and beyond, we can perform a transition from the new tone (or&nbsp;chord) to&nbsp;the source and, further, to other kinds of sound.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h1><strong>3. Granshaping</strong></h1>\r\n<p><strong>&lsquo;Granular waveshaping&rsquo;</strong> is a hybrid of the two types of synthesis mentioned in its name.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h3><strong>3.1. Two perspectives on waveshaping.</strong></h3>\r\n<p>Generally, waveshaping is a non-linear interaction of two signals. This kind of synthesis was discovered by Daniel Arfib⁹ and&nbsp;Marc Le&nbsp;Brun&sup1;⁰ in 1978&ndash;1979.</p>\r\n<p>In the common DAW context, waveshaping is understood as a kind of sound processing that remaps the&nbsp;<em>values</em> of the input signal according to a second signal treated as&nbsp;a&nbsp;mathematical function. Using this approach we treat the input signal as a <em>sound</em> and&nbsp;the second signal as&nbsp;a&nbsp;<em>transfer function</em>. Thus waveshaping is treated as a kind of&nbsp;<strong>complex <em>amplification</em></strong> of the input. This point of view allows us to easily describe and&nbsp;model such non-linear processing devices as overdrive, distortion, saturation, etc. (and this description of waveshaping was historically the first).</p>\r\n<p>But quite another approach to waveshaping is possible as well. We can also treat the&nbsp;second signal as a pre-recorded or generated <em>sound</em> and the input signal as a <em>control</em> which drives the <em>phase</em> of playing the second signal. From this position waveshaping is thought of as a <strong>complex changing of the playback <em>time</em> </strong>of the previously sampled sound: the&nbsp;input signal now controls a &lsquo;playback cursor&rsquo; for the loaded sound.</p>\r\n<p></p>\r\n<p>&nbsp;<img alt=\"Fig. 
6. Two approaches to waveshaping\" src=\"https://forum.ircam.fr/media/uploads/user/c2e63704b56270dcb51cd336e714548e.png\" width=\"1434\" height=\"1072\" /></p>\r\n<p style=\"text-align: center;\">Fig. 6. Two approaches to waveshaping</p>\r\n<p>&nbsp;</p>\r\n<p>Consequently, we can treat both the <em>control</em> and the <em>transfer</em> signal as <em>sounds</em> altering one another in the waveshaping process. When we focus on the first sound, we see how the&nbsp;<em>transfer</em> changes its <em>values</em>; and when we focus on the second sound, we can observe how the <em>control</em> alters its&nbsp;<em>phase</em> or, better, its <em>playback trajectory</em>. Both explanations are equally valid and describe the same thing (Fig.&nbsp;6).</p>\r\n<p>&nbsp;</p>\r\n<h3><strong>3.2. Granshaping as a &lsquo;doubled&rsquo; granular synthesis.</strong></h3>\r\n<p>So, if both signals can be sounds, we will consider them as such. <strong>Granshaping</strong> is a&nbsp;synthesis in which both of these signals are synchronised grains of two different source sounds. Both grains have their individual playing rate (𝑟) and time position (𝜏₀) values, but&nbsp;the&nbsp;repetition frequency (𝑓) is the same, as the grains are synchronised. As a result, &lsquo;granular waveshaping&rsquo; is like granular synthesis &lsquo;multiplied by&nbsp;two&rsquo;: both sounds have independent settings, for&nbsp;example for randomisation.</p>\r\n<p>Technically, GranShaper is developed in the <strong>Max/MSP</strong> programming environment&sup1;&sup1; as a Max project, utilising the <em>gen</em> extension&sup1;&sup2; for sound processing operations (the <em>gen~</em> object). 
Since our granular engine is implemented as a basic chain of Max gen objects&nbsp;&mdash; &lsquo;<em>phasor</em> &rarr; (scaling) &rarr; <em>peek</em>&rsquo; &mdash; our granshaping engine extends this structure with&nbsp;an&nbsp;additional step: &lsquo;<em>phasor</em> &rarr; (scaling) &rarr; <em>peek</em> &rarr; (scaling) &rarr; another <em>peek</em>&rsquo;.</p>\r\n<p>The result is an intensely energetic sound with an incredibly rich spectrum.</p>\r\n<p>Here the&nbsp;<strong>parameters</strong> of granular synthesis take on a new, interesting interpretation. Both the amplitude and the formant (altered by the playing rate) of the <em>control</em> shift the resulting sound spectrum higher or lower (though with different nuances).&nbsp;</p>\r\n<p>Sometimes it makes sense to use an <strong>extremely low playing rate</strong> for one of the two grains, as even a very short grain is enough to&nbsp;create a very saturated sound. When we dramatically reduce the playing rate of&nbsp;one of the two source sounds, we use grains of very different sizes: one of &lsquo;middle&rsquo; length and&nbsp;another extremely short. In this case our &lsquo;two approaches&rsquo; to&nbsp;waveshaping become practically applicable; this is clearest at low frequencies. When the <em>transfer</em> grain is much shorter than the <em>control</em> grain, we can use &lsquo;approach 1&rsquo;, i.&thinsp;e. our sound resembles the first source sound &lsquo;processed&rsquo; by the second one: the result sounds close to non-linear effects like distortion. And when, vice versa, the <em>control</em> grain is very short and the <em>transfer</em> much longer, &lsquo;approach 2&rsquo; describes the situation better: our sound resembles DJ scratching, where the second sound is &lsquo;scrubbed&rsquo; or &lsquo;rewound&rsquo; during playback.</p>\r\n<p>At frequencies above 20 Hz the results are various sounds with a huge variety of harmonically rich spectra.</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h1><strong>4. 
Granular Shapemorphing</strong></h1>\r\n<p>Let&rsquo;s turn back to waveshaping, however. If one of the two signals is a <strong>ramp</strong> (linear function), the result of waveshaping is similar to the other signal. For example, when the&nbsp;<em>control</em> is a linear function, that means that the <em>transfer</em> is just being played from the&nbsp;beginning to the end at a constant speed. Actually, it amounts to what the <em>phasor</em> object does. On the other hand, when the <em>transfer</em> is a ramp, any input value is simply equal to the output one, i.&thinsp;e. the <em>control</em> is just bypassed. Consequently, a ramp signal in waveshaping is a kind of &lsquo;default&rsquo; signal which changes nothing.&nbsp;</p>\r\n<p>Thus,&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p style=\"padding-left: 200px;\"><em>waveform&nbsp;</em>1&nbsp;&rarr;&nbsp;<em>ramp</em> = <em>waveform</em>&nbsp;1 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; (2)</p>\r\n<p style=\"padding-left: 200px;\"><em>ramp</em>&nbsp;&rarr;&nbsp;<em>waveform&nbsp;</em>2 = <em>waveform&nbsp;</em>2</p>\r\n<p>&nbsp;</p>\r\n<p>where &lsquo;&rarr;&rsquo; is waveshaping.</p>\r\n<p>We took advantage of this feature by creating a mix of&nbsp;a&nbsp;ramp and some other wavetable in different proportions for each of&nbsp;the&nbsp;interacting granules. By such a crossfade between the ramp and a certain wavetable, we change the <em>contribution</em> of this waveform to&nbsp;the synthesis. This trick allows a smooth transformation from one sound to&nbsp;the&nbsp;other, from signal 1 to signal 2. When one of them turns into the ramp we just hear the other. When we use both source signals without mixing them with a ramp, we generate a waveshaping sound which is quasi &lsquo;in the middle&rsquo; of the source sounds, but it is actually not the &lsquo;middle&rsquo;: it is considerably more energetic than the original signals. Because of the non-linearity of the waveshaping processes, the&nbsp;resulting intermediate sounds cannot simply be regarded as a mix. 
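The ramp identities (2) are easy to verify numerically. The sketch below is an illustration under our own assumptions (a simple table-lookup waveshaper we call `waveshape()`, with both signals normalised to 0…1), not the actual GranShaper implementation:

```python
import numpy as np

# Checking property (2): a ramp is the 'neutral' signal in waveshaping.
# waveshape() is a hypothetical table-lookup shaper: the control (0...1)
# indexes into the transfer table, as in the 'playback cursor' view.

N = 4096
ramp = np.linspace(0.0, 1.0, N)       # linear 'do nothing' transfer
wave = np.sin(2 * np.pi * 3 * ramp)   # some other wavetable
wave01 = (wave + 1) / 2               # rescaled to 0...1 for use as a control

def waveshape(control, transfer):
    """Read the transfer table at positions given by the control (both 0...1)."""
    idx = np.clip(np.rint(control * (len(transfer) - 1)).astype(int),
                  0, len(transfer) - 1)
    return transfer[idx]

out_a = waveshape(wave01, ramp)   # ramp as transfer: the control passes through
out_b = waveshape(ramp, wave)     # ramp as control: the transfer is played as-is

# Crossfading the transfer with the ramp scales its 'contribution':
a = 0.3
mixed_transfer = a * wave + (1 - a) * ramp
```

Sweeping `a` from 0 to 1 moves the result from the untouched control towards the fully waveshaped sound, which is the crossfade trick described above.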
We described this process as&nbsp;<strong>&lsquo;granular shapemorphing&rsquo;</strong>.</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\"><img alt=\"Fig. 7. ShapeMorphing algorithm allowing gradual transition between the control and&nbsp;the&nbsp;transfer via&nbsp;crossfades with ramps\" src=\"https://forum.ircam.fr/media/uploads/user/96abb7f48da66478cb371406e57ed0c1.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: center;\">Fig. 7. ShapeMorphing algorithm allowing gradual transition between the control and&nbsp;the&nbsp;transfer via&nbsp;crossfades with ramps</p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<p>The construct depicted in Fig. 7 can assume the following extreme states (Fig. 8):</p>\r\n<p></p>\r\n<p>&nbsp;<img alt=\"Fig. 8. Extreme states of ShapeMorphing engine, when the output is: A. Transfer signal; B.&nbsp;Control&nbsp;signal; C. Product of&nbsp;real waveshaping; D.&nbsp;Just saw waveform \" src=\"https://forum.ircam.fr/media/uploads/user/5ca6031cd582c99f691f8f862c6552a3.png\" width=\"1265\" height=\"1154\" /></p>\r\n<p style=\"text-align: center;\">Fig. 8. Extreme states of ShapeMorphing engine, when the output is: <br />A.&nbsp;Transfer signal; B.&nbsp;Control&nbsp;signal; C.&nbsp;Product of&nbsp;real waveshaping; D.&nbsp;Just saw waveform&nbsp;</p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<p>All possible intermediate states provide what we could refer to as &lsquo;<strong>morphing</strong>&rsquo;&mdash;in&nbsp;quotes indeed, because it is, like our &lsquo;vocoding&rsquo;, neither a spectral technique nor&nbsp;a&nbsp;simple crossfade between two sounds <em>A</em> and <em>B</em> (so it&rsquo;s not a real morphing in&nbsp;a&nbsp;traditional sense). 
In our scheme, the crossfades are used to&nbsp;make gradual transitions between waveshaping and &lsquo;clean&rsquo; sound, and the intermediate sound between the source waveforms <em>A</em> and <em>B</em> is a waveshaping product <em>C</em> which is actually not &lsquo;in&nbsp;between&rsquo; the two.</p>\r\n<p>Thus, our method of sound transitions forms a 2D space represented by&nbsp;a&nbsp;<em>pictslider</em> in&nbsp;GranShaper. It is depicted in Fig.&nbsp;9:</p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p style=\"text-align: center;\"><img alt=\"Fig. 9. Fragment of GranShaper interface: pictslider object for manipulating &amp;lsquo;morphing&amp;rsquo;\" src=\"https://forum.ircam.fr/media/uploads/user/67bf15c86bdf55acaaa2808bf88c4177.png\" /></p>\r\n<p style=\"text-align: center;\">Fig. 9. Fragment of GranShaper interface: pictslider object for manipulating &lsquo;morphing&rsquo;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h1><strong>5. Mathematical details</strong></h1>\r\n<p>It would make sense to include, above all, the formulae, but since this article is intended for musicians, we chose not to overload it with mathematical expressions. However, it would still be worthwhile to provide a mathematical description of all the processes outlined here.</p>\r\n<p>&nbsp;</p>\r\n<h3><strong>5.1. 
Formula for our granular synthesis method</strong></h3>\r\n<p>(without randomisation and&nbsp;other parameter changes):</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">𝑠&nbsp;(𝑡)&ensp;=&ensp;𝑔&nbsp;( 𝜑(𝑓𝑡)&thinsp;&bull;&thinsp;𝑟&nbsp;/ 𝑓 + 𝜏₀) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; (3)</p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<p>where:</p>\r\n<ul>\r\n<li>𝑠(𝑡) is the resulting signal,</li>\r\n<li>𝑔(𝜏) is the source signal (𝜏 represents the time of the recorded source signal, technically the sample number, as opposed to 𝑡, which is the real time of&nbsp;the&nbsp;resulting sound),</li>\r\n<li>𝑓 is the repetition frequency in Hz,</li>\r\n<li>𝑟 is the playback speed, defined as 𝑙&thinsp;&bull;&thinsp;𝑓, where 𝑙 is the grain length (see formula (1)), being the difference between 𝜏<sub>m</sub> and&nbsp;𝜏₀, where 𝜏<sub>m</sub> is the end of the grain, a&nbsp;temporal position in the source sound corresponding to the playback end, while 𝜏₀ is the start of the grain,</li>\r\n<li>𝜑(𝑓𝑡) is the phase, normalised to the range 0&hellip;1, as a function of time 𝑡, scaled by&nbsp;the&nbsp;repetition frequency 𝑓 (actually, the function of Max&rsquo;s <em>phasor</em> object); <br />so&nbsp;𝜑(𝑓𝑡)&nbsp;/ 𝑓 represents the time &lsquo;wrapped&rsquo; to a single grain.</li>\r\n</ul>\r\n<p>&nbsp;</p>\r\n<p>The full formula without &lsquo;𝑟&rsquo; looks as follows:</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">𝑠&nbsp;(𝑡)&ensp;=&ensp;𝑔&nbsp;( 𝜑(𝑓𝑡)&thinsp;&bull;&thinsp;𝑙 + 𝜏₀)&ensp;=&ensp;𝑔&nbsp;( 𝜑(𝑓𝑡)&thinsp;&bull;&thinsp;(𝜏<sub>m</sub> &ndash; 𝜏₀) + 𝜏₀) &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; (4)</p>\r\n<p>&nbsp;</p>\r\n<p>To this formula, various randomisations and parameter modifications are to be added.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h3><strong>5.2. For example, for granular &lsquo;vocoding,&rsquo;</strong></h3>\r\n<p>gradual modification of the initial playback position 𝜏₀ is necessary. 
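Formula (3) translates almost literally into code. The following numpy sketch is illustrative only; the source signal 𝑔 and all parameter values are arbitrary stand-ins:

```python
import numpy as np

# Illustrative rendering of formula (3); g() and all parameter values
# are arbitrary stand-ins.

sr = 48000
t = np.arange(sr) / sr        # one second of 'real time' t
f = 50.0                      # repetition frequency (Hz)
r = 1.0                       # playback speed
tau0 = 0.2                    # grain start inside the source (seconds)

def g(tau):
    """Stand-in source signal g(tau); tau is recorded time in seconds."""
    return np.sin(2 * np.pi * 440 * tau)

phi = (f * t) % 1.0           # phasor, normalised to 0...1
tau = phi * r / f + tau0      # the argument of g in formula (3)
s = g(tau)

l = r / f                     # grain length, as in formula (4): l = tau_m - tau0
```

The read position `tau` stays inside the single grain 𝜏₀ … 𝜏₀ + 𝑙, repeating 𝑓 times per second.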
If 𝜏₀ changes from the beginning to the end at a normal playback rate, then:</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">𝜏₀&ensp;=&ensp;𝛱&thinsp;&bull;&thinsp;&lfloor;𝑡/𝑃&rfloor; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; (5)</p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<p>where:</p>\r\n<ul>\r\n<li>𝛱 is the &lsquo;period&rsquo; inside the internal time 𝜏 of the recorded sound, i.&thinsp;e., the&nbsp;temporal distance between the starts of two adjacent grains within the source sound (for&nbsp;normal playback speed, 𝛱 = 𝑃, then 𝜏₀ is close to 𝑡),</li>\r\n<li>&lfloor; &rfloor; denotes rounding down to the nearest integer (truncating), ensuring that 𝜏₀ changes discretely, once per period (to prevent changes in 𝜏₀ from affecting the&nbsp;actual grain playback speed). Technically, this rounding is performed by&nbsp;the&nbsp;<em>sah</em> (&lsquo;sample-and-hold&rsquo;) object in Max gen and MSP.</li>\r\n</ul>\r\n<p>&nbsp;</p>\r\n<p>So, the general formula for playback in granular &lsquo;vocoding&rsquo; is</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">𝑠&nbsp;(𝑡)&ensp;=&ensp;𝑔&nbsp;( 𝜑(𝑓𝑡)&thinsp;&bull;&thinsp;𝑟&nbsp;/ 𝑓 + 𝛱&thinsp;&bull;&thinsp;&lfloor;𝑡/𝑃&rfloor;) &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; (6)</p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<h3><strong>5.3. The granshaping formula</strong></h3>\r\n<p>represents a &lsquo;doubled&rsquo; granular synthesis. 
Here, 𝑔(𝜏) is the already familiar granular synthesis function representing the <em>control</em>, while 𝑘(𝜏<em><sub>k</sub></em>) is a similar granular synthesis function applied to a different pre-recorded sound representing the <em>transfer</em>, where its playback parameters (rate 𝑟<em><sub>k</sub></em> and initial grain start time 𝜏₀<em><sub>k</sub></em>) can be set independently:</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">𝑠&nbsp;(𝑡)&ensp;=&ensp;𝑘 ([&frac12;&thinsp;𝑔&thinsp;( 𝜑(𝑓𝑡)&thinsp;&bull;&thinsp;𝑟&nbsp;/ 𝑓 + 𝜏₀) + &frac12;]&thinsp;&bull;&thinsp;𝑟<em><sub>k</sub></em> + 𝜏₀<em><sub>k</sub></em>) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<span style=\"display: inline !important;\"> &nbsp; &nbsp;</span>(7)</p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p>The manipulations with &frac12; simply scale the <em>control</em> signal 𝑔 (𝜏) from the &ndash;1&hellip;1 range to the 0&hellip;1 range so that it is valid for &lsquo;phasing&rsquo; the <em>transfer</em> signal 𝑘 (𝜏<em><sub>k</sub></em>).</p>\r\n<p>&nbsp;</p>\r\n<h3><strong>5.4. 
GranShapeMorphing.</strong></h3>\r\n<p>The formula below generalises the previous formula (7) by including a ramp 𝜑(𝑡)&nbsp;&mdash; its value is equal to the phase (normalised to the range 0&hellip;1; technically we mix the <em>phasor</em> output into the sound)&nbsp;&mdash; and crossfading signal weights 𝐴 (for the <em>control</em>) and 𝐴<em><sub>k</sub></em> (for&nbsp;the&nbsp;<em>transfer</em>):</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: left; padding-left: 440px;\">𝑠&nbsp;(𝑡)&ensp;=&ensp;𝐴<em><sub>k</sub></em>&thinsp;&bull;&thinsp;𝑘 ([&frac12; 𝐴&thinsp;&bull;&thinsp;𝑔 ( 𝜑(𝑓𝑡)&thinsp;&bull;&thinsp;𝑟&nbsp;/ 𝑓 + 𝜏₀) + &frac12;]&thinsp;&bull;&thinsp;𝑟<em><sub>k</sub></em> + (1&ndash;𝐴)&thinsp;&bull;&thinsp;𝜑(𝑓𝑡) + 𝜏₀<em><sub>k</sub></em>)&ensp;+&ensp;<br />+&ensp;(1&ndash;𝐴<em><sub>k</sub></em>)&thinsp;&bull;&thinsp;𝜑 ([&frac12; 𝐴&thinsp;&bull;&thinsp;𝑔 ( 𝜑(𝑓𝑡)&thinsp;&bull;&thinsp;𝑟&nbsp;/ 𝑓 + 𝜏₀) + &frac12;]&thinsp;&bull;&thinsp;𝑟<em><sub>k</sub></em> + (1&ndash;𝐴)&thinsp;&bull;&thinsp;𝜑(𝑓𝑡) ) &nbsp; &nbsp; &nbsp;<span style=\"display: inline !important;\"> &nbsp; &nbsp; &nbsp; &nbsp; </span>(8)</p>\r\n<p style=\"text-align: left; padding-left: 440px;\">&nbsp;</p>\r\n<p>Indeed, this is a cumbersome formula, but it can be simplified by introducing a&nbsp;substitution:</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">([&frac12; 𝐴&thinsp;&bull;&thinsp;𝑔 ( 𝜑(𝑓𝑡)&thinsp;&bull;&thinsp;𝑟&nbsp;/ 𝑓 + 𝜏₀) + &frac12;]&thinsp;&bull;&thinsp;𝑟<em><sub>k</sub></em> + (1&ndash;𝐴)&thinsp;&bull;&thinsp;𝜑(𝑓𝑡) )&ensp;= 𝐦 &nbsp; &nbsp; &nbsp; &nbsp;<span style=\"display: inline !important;\"> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;</span>&nbsp;(9)</p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p>where 𝐦 represents a &lsquo;mixed input&rsquo;&nbsp;&mdash; a weighted sum of the control signal 𝑔(𝜏) and&nbsp;the&nbsp;ramp 𝜑(𝑡). 
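Using the 'mixed input' substitution just introduced, shapemorphing can also be sketched numerically. The code below is an illustration with stand-in source sounds and arbitrary parameter values, not the gen~ implementation:

```python
import numpy as np

# Sketch of the shapemorphing structure via the 'mixed input' substitution;
# g, k and all parameter values are stand-ins.

sr, f, r, rk = 48000, 50.0, 1.0, 1.0
tau0, tau0k = 0.1, 0.3
t = np.arange(sr) / sr

g = lambda tau: np.sin(2 * np.pi * 440 * tau)            # control source (stand-in)
k = lambda tau: np.sign(np.sin(2 * np.pi * 220 * tau))   # transfer source (stand-in)
phi = lambda x: x % 1.0                                  # normalised ramp ('phasor')

def shapemorph(A, Ak):
    """Weights A, Ak crossfade each signal with the 'neutral' ramp."""
    ramp = phi(f * t)
    m = (0.5 * A * g(ramp * r / f + tau0) + 0.5) * rk + (1 - A) * ramp  # substitution
    return Ak * k(m + tau0k) + (1 - Ak) * phi(m)

s_full = shapemorph(A=1.0, Ak=1.0)   # pure granshaping (state C in Fig. 8)
s_saw = shapemorph(A=0.0, Ak=0.0)    # both ramps: just a saw (state D in Fig. 8)
```

Intermediate weight values give the 2D 'morphing' space controlled by the pictslider.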
Then:</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">𝑠&nbsp;(𝑡)&ensp;=&ensp;𝐴<em><sub>k</sub></em>&thinsp;&bull;&thinsp;𝑘&nbsp;(𝐦 + 𝜏₀<em><sub>k</sub></em>)&ensp;+&ensp;(1&ndash;𝐴<em><sub>k</sub></em>)&thinsp;&bull;&thinsp;𝜑&nbsp;(𝐦) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; (10)</p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<h1><strong>6. Randomization</strong></h1>\r\n<p>Admittedly, granular synthesis has been inconceivable without statistical and&nbsp;probabilistic procedures. While formulating the principles of granular synthesis, Xenakis introduced the concepts of <em>ataxy</em> (order or disorder) and of&nbsp;<em>cloud of grains</em>&sup1;&sup3;. In&nbsp;<strong>GranShaper</strong>, we use four kinds of&nbsp;randomisation for each of the three main parameters&nbsp;&mdash; grain repetition frequency (𝑓), playing rate (𝑟) and grain time position (time offset; 𝜏₀):</p>\r\n<ul>\r\n<li>&lsquo;dispersion&rsquo;, i.&thinsp;e. the parameter range, &Delta; (units depend on the parameter);</li>\r\n<li>frequency of parameter change, &lsquo;tempo&rsquo;, 𝐹 (Hz; values like 1000 Hz can be understood as&nbsp;&lsquo;continuous change&rsquo;, as we cannot hear events more often than 20 times per&nbsp;second; all values below form a constant rhythm of changes);</li>\r\n<li>&lsquo;dispersion&rsquo; of changing frequency, i.&thinsp;e. 
the range of tempi of&nbsp;the&nbsp;parameter changes, or &lsquo;tempo range&rsquo;; it could also be called &lsquo;rhythmic acuity&rsquo; or &lsquo;duration contrast&rsquo;, because this parameter makes a&nbsp;transition from an equal rhythm of changes to a rhythm with very different durations, &Delta;𝐹 (octaves);</li>\r\n<li>frequency of change of the tempo range itself: with&nbsp;high values of this variable the &lsquo;rhythm&rsquo; changes at every unit, but with low values it can change rarely, allowing sequences of equal durations in&nbsp;a row; also called &lsquo;rhythm irregularity&rsquo;, 𝐼 (Hz).</li>\r\n</ul>\r\n<p>We use the Max <em>noise</em> object to generate a random signal in the range &ndash;1&hellip;1. So, let 𝛕₀, 𝒇, 𝒓, 𝑭 be, respectively, the time position, repetition frequency, playing rate and frequency of&nbsp;parameter changes <em>after</em> randomisation, while 𝜏₀, 𝑓, 𝑟, 𝐹 are the same parameters <em>before</em> randomisation (the parameters being set); &lsquo;noise&rsquo; is a function of time 𝑡 scaled by&nbsp;𝑭 or&nbsp;𝐼 and quantised&nbsp;by &lfloor;&rfloor;&nbsp;with the <em>sah</em> object to prevent changes within a grain and avoid distortions. 
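This noise-plus-sample-and-hold scheme can be sketched as follows. It is an illustration only: numpy's random generator stands in for the Max noise object, and the helper name `sah_noise` is ours:

```python
import numpy as np

# Illustration of the noise + sample-and-hold randomisation scheme;
# the generator stands in for the Max noise object.

rng = np.random.default_rng(0)
sr = 48000
t = np.arange(sr) / sr

def sah_noise(F, t, rng):
    """White noise in -1...1, held constant between change events:
    a new value is drawn once per period of the change frequency F (Hz),
    like Max's sah object driven by a phasor."""
    steps = np.floor(F * t).astype(int)               # the floor(F*t) quantisation
    values = rng.uniform(-1.0, 1.0, steps.max() + 1)  # one value per step
    return values[steps]

# Randomised time position (additive range, like formula (11))
tau0, d_tau0, F_tau = 0.2, 0.05, 8.0
tau0_rand = tau0 + sah_noise(F_tau, t, rng) * d_tau0

# Octave-scaled randomisation of a frequency (like formula (13))
f, d_f, F_f = 100.0, 1.0, 8.0
f_rand = f * 2.0 ** (sah_noise(F_f, t, rng) * d_f)
```

With an 8 Hz change frequency the held value changes only eight times per second, producing a constant rhythm of parameter changes rather than continuous noise.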
&nbsp;</p>\r\n<p>Then the final time position is:</p>\r\n<p>&nbsp;</p>\r\n<p style=\"padding-left: 400px;\">𝛕₀ (𝑡)&ensp;=&ensp;𝜏₀ + noise (&lfloor;𝑭<sub>𝜏</sub>&thinsp;𝑡&rfloor;)&thinsp;&bull;&thinsp;&Delta;𝜏₀ &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;(11)</p>\r\n<p style=\"padding-left: 400px;\">𝑭<sub>𝜏</sub> (𝑡)&ensp;=&ensp;𝐹<sub>𝜏</sub> &bull; 2&thinsp;^&thinsp;(noise (&lfloor;𝐼<sub>𝜏</sub>&thinsp;𝑡&rfloor;)&thinsp;&bull;&thinsp;&Delta;𝐹<sub>𝜏</sub>) &nbsp; &nbsp; &nbsp; &nbsp; (12)</p>\r\n<p>&nbsp;</p>\r\n<p>the final repetition frequency:</p>\r\n<p>&nbsp;</p>\r\n<p style=\"padding-left: 400px;\">𝒇 (𝑡)&ensp;=&ensp;𝑓 &bull; 2&thinsp;^&thinsp;(noise (&lfloor;𝑭<sub>𝑓</sub>&thinsp;𝑡&rfloor;)&thinsp;&bull;&thinsp;&Delta;𝑓 ) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; (13)</p>\r\n<p style=\"padding-left: 400px;\">𝑭<sub>𝑓</sub> (𝑡)&ensp;=&ensp;𝐹<sub>𝑓</sub> &bull; 2&thinsp;^&thinsp;(noise (&lfloor;𝐼<sub>𝑓</sub>&thinsp;𝑡&rfloor;)&thinsp;&bull;&thinsp;&Delta;𝐹<sub>𝑓</sub> ) &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; (14)</p>\r\n<p>&nbsp;</p>\r\n<p>the final playing rate:</p>\r\n<p>&nbsp;</p>\r\n<p style=\"padding-left: 400px;\">𝒓 (𝑡)&ensp;=&ensp;𝑟 &bull; 2&thinsp;^&thinsp;(noise (&lfloor;𝑭<sub>𝑟</sub>&thinsp;𝑡&rfloor;)&thinsp;&bull;&thinsp;&Delta;𝑟) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;(15)</p>\r\n<p style=\"padding-left: 400px;\">𝑭<sub>𝑟</sub>(𝑡)&ensp;=&ensp;𝐹<sub>𝑟</sub> &bull; 2&thinsp;^&thinsp;(noise (&lfloor;𝐼<sub>𝑟</sub>&thinsp;𝑡&rfloor;)&thinsp;&bull;&thinsp;&Delta;𝐹<sub>𝑟</sub>) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; (16)</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h1><strong>7. 
Windowing: overdrive and &lsquo;staccato&rsquo;</strong></h1>\r\n<p>A common solution for avoiding clicks at the grain borders is multiplying the grain by&nbsp;a&nbsp;window function to create fade-ins and fade-outs&sup1;⁴. We apply the very common von&nbsp;Hann window&sup1;⁵ for that purpose.&nbsp;</p>\r\n<p>But for our taste, ordinary windowing sometimes makes granular sound too sterile, causing it to lose some of its raw energy&mdash;energy that is preserved in the <em>Kontakte</em> example in Fig. 2. Clicks or transients at grain borders contribute to&nbsp;the sound spectrum, making it richer when 𝑓 rises above 20 Hz. To preserve both the possibility of using a&nbsp;standard Hann window and that of approaching a rectangular one, while nonetheless avoiding clicks at lower frequencies, we apply <strong>overdrive</strong> to our window by multiplying it by a coefficient and transforming it with a hyperbolic tangent. This operation &lsquo;fattens&rsquo; the&nbsp;window, making the fades shorter, but they never disappear, as the&nbsp;hyperbolic tangent is a&nbsp;smooth function (see Fig.&nbsp;10). This witty solution was suggested by&nbsp;composer and&nbsp;media artist Alex Nadzharov&sup1;⁶.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"Fig. 10. Von Hann window overdriven by&nbsp;hyperbolic tangent with different pre-scaling\" src=\"https://forum.ircam.fr/media/uploads/user/32ba25354a21172d1bee308686fd3c35.png\" width=\"1721\" height=\"248\" /></p>\r\n<p style=\"text-align: center;\">Fig. 10. Von Hann window overdriven by&nbsp;hyperbolic tangent with different pre-scaling</p>\r\n<p>&nbsp;</p>\r\n<p>The other kind of window transformation creates a &lsquo;<strong>staccato</strong>&rsquo; effect, where a grain is cut off before reaching its full duration. At lower frequencies, this produces a true staccato, while above 20 Hz, it results in a &lsquo;thinner&rsquo; sound with a redistribution of&nbsp;spectral energy towards higher frequencies. 
Technically, this effect is achieved by&nbsp;accelerating the window playback, so it reaches its end before the grain is fully played, causing the remaining part of the grain to be silenced.</p>\r\n<p>Finally, the windowed sound with the combination of window overdrive and&nbsp;&lsquo;staccato&rsquo; applied can be expressed as follows:</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">𝑤&nbsp;(𝑡)&ensp;=&ensp;𝑐 &bull; tanh [𝑎 &bull; Hann&nbsp;(𝑞𝑡)] &bull; 𝑠&nbsp;(𝑡) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; (17)</p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p>where:</p>\r\n<ul>\r\n<li>𝑤(𝑡)&nbsp;is the windowed sound,</li>\r\n<li>𝑎 is an overdrive coefficient (overdrive factor),</li>\r\n<li>𝑐&nbsp;is a certain amplitude compensation after applying tanh (which reduces the sound level at low 𝑎 values),</li>\r\n<li>Hann(𝑡)&nbsp;is the von Hann window function (we store it pre-generated in a buffer, so&nbsp;we don&rsquo;t need to calculate this function every time),</li>\r\n<li>𝑞&nbsp;is the &lsquo;staccato&rsquo; window-playback accelerator: when it equals 1, there is no &lsquo;staccato&rsquo;&mdash;the window is played over the full grain length; the higher the value, the faster the&nbsp;window is played, shortening the actual duration of the grain,</li>\r\n<li>𝑠(𝑡) is the unwindowed signal (obtained by formulas (8), (10) + randomisation).</li>\r\n</ul>\r\n<p>&nbsp;</p>\r\n<h1><strong>8. &lsquo;MMMM&rsquo;</strong></h1>\r\n<p>Technically, GranShaper is implemented within the &lsquo;<strong>MMMM</strong>&rsquo; format, which we designed. It provides a standard framework for building flexible, microtonal synthesisers within Max/MSP, controllable via MIDI. Actually, &lsquo;MMMM&rsquo; (&lsquo;Max MIDI Music Microtonal&rsquo;) is a standardised Max patch where one can put their own synthesiser engine without worrying about polyphony, MIDI control, or microtonal pitch remapping (as shown in&nbsp;Fig.&nbsp;11). 
The MMMM patch:</p>\r\n<ul>\r\n<li>processes MIDI polyphony, making it work like a traditional keyboard-controlled synthesizer;</li>\r\n<li>handles microtonal pitch mapping, computing frequencies for non-12&nbsp;TET tuning systems;</li>\r\n<li>provides flexible MIDI control, including pitch bend and modulation wheel, with&nbsp;adjustable range settings;</li>\r\n<li>expands sustain pedal functionality, offering four distinct modes, including&nbsp;sostenuto.</li>\r\n</ul>\r\n<p>The key advantage of &lsquo;MMMM&rsquo; is simple integration: any oscillator or synthesis engine can be inserted into this framework, instantly gaining all these advanced MIDI features without requiring additional programming. This allows for rapid development of diverse synthesisers, all sharing a unified control interface.</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\"><img alt=\"Fig. 11. &amp;lsquo;MMMM&amp;rsquo; subpatch for putting a new synthesiser engine into standardised interface\" src=\"https://forum.ircam.fr/media/uploads/user/66f9d087656fda1d3454c17697a5872e.png\" /></p>\r\n<p style=\"text-align: center;\">Fig. 11. &lsquo;MMMM&rsquo; subpatch for putting a new synthesiser engine into standardised interface</p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p>For GranShaper, &lsquo;MMMM&rsquo; provides &lsquo;normal&rsquo; polyphony, but inside each voice, we introduce a&nbsp;multitude of grains. Thus we can play either &lsquo;normal tones&rsquo; (with &lsquo;MMMM&rsquo; microtonal and MIDI keyboard tools) or all granular and shaping features: textures, probability, complex repetitions, overtone glissandi, timbre transitions etc.</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\"><img alt=\"Fig. 12. GranShaper interface fragment\" src=\"https://forum.ircam.fr/media/uploads/user/b636a1205e8cf729b4b78faa65e51f52.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"1505\" height=\"1035\" /></p>\r\n<p style=\"text-align: center;\">Fig. 12. 
GranShaper interface fragment</p>\r\n<p>&nbsp;</p>\r\n<p>&lsquo;MMMM&rsquo; is still in progress, but we hope that in the future it will enable the&nbsp;creation of an&nbsp;open&nbsp;community of developers who will contribute new synthesisers within this framework and help build a large open library. As &lsquo;MMMM&rsquo; simplifies building the interface, the polyphony and all the MIDI machinery, it will allow for faster creation of new synthesisers.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h1><strong>Conclusion</strong></h1>\r\n<p>In this article, we began our exploration with well-known concepts such as&nbsp;granular synthesis and waveshaping in general, but we also introduced our own perspective on these topics. Building on these foundations, we proposed and described several new approaches to sound synthesis and sound processing techniques, including granular &lsquo;vocoding&rsquo;, granshaping, and granshapemorphing. Finally, we presented their technical implementation in Max as the GranShaper synthesizer, based on the &lsquo;MMMM&rsquo; format.</p>\r\n<p>GranShaper is still a work in progress&mdash;we have much testing and refinement ahead. Thus, this article serves more as a presentation of new synthesis strategies and principles than as a finalised product. 
Stay tuned for updates and announcements regarding GranShaper and &lsquo;MMMM&rsquo;.</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h1><em><strong>Notes</strong></em></h1>\r\n<p><sup>1</sup> Here are just a few examples: <em>Monolake Granulator</em> by Robert Henke (Max-for-Live Device) &mdash; <a href=\"https://www.ableton.com/en/packs/granulator-ii/\">https://www.ableton.com/en/packs/granulator-ii/</a>, <em>Borderland Granular</em> by&nbsp;Chris Carlson &mdash; <a href=\"http://www.borderlands-granular.com/app/\">http://www.borderlands-granular.com/app/</a>, <em>Crusher-X</em> by&nbsp;AccSone &mdash; <a href=\"https://accsone.com/\">https://accsone.com</a>, <em>grainflow</em>, <em>LowkeyNW</em>, and <em>petra</em> (by&nbsp;Circuit Music ) for Max&nbsp;&mdash; see Max Packages. Meanwhile, commercially driven online music resources are full of titles like &lsquo;17 Best Granular VST Plugins&rsquo; (e.&nbsp;g., <a href=\"https://www.musicindustryhowto.com/granular-vst-plugins/\">https://www.musicindustryhowto.com/granular-vst-plugins/</a>).</p>\r\n<p><sup>2</sup> See Xenakis (1992, <em>54, 58, 103, 373</em>); Roads (2001, <em>22, 27&ndash;28</em>).</p>\r\n<p><sup>3</sup> See Xenakis (1992, <em>43&ndash;109</em>); Roads (1996, <em>196</em>).</p>\r\n<p><sup>4</sup> See Roads (1996, <em>196</em>).</p>\r\n<p><sup>5</sup> See ibid.</p>\r\n<p><sup>6</sup> Xenakis (1992, <em>43</em>).</p>\r\n<p><sup>7</sup>&nbsp;See Stockhausen (1957, <em>10&ndash;40</em>).</p>\r\n<p><sup>8</sup> See Xenakis (1992, <em>103&ndash;109</em>).</p>\r\n<p><sup>9</sup>&nbsp;See: Arfib (1978), (1979, <em>757&ndash;768</em>)</p>\r\n<p><sup>10</sup>&nbsp;Le Brun (1979, <em>250&ndash;266</em>).</p>\r\n<p><sup>11</sup> See <a href=\"https://cycling74.com/products/max\">https://cycling74.com/products/max</a>.</p>\r\n<p><sup>12</sup> See <a 
href=\"https://docs.cycling74.com/legacy/max8/vignettes/gen_topic\">https://docs.cycling74.com/legacy/max8/vignettes/gen_topic</a>.</p>\r\n<p><sup>13</sup> See Xenakis (1992, <em>63&ndash;68</em>, <em>182, 12</em>).</p>\r\n<p><sup>14</sup> See Roads (2001, <em>87&ndash;90</em>).</p>\r\n<p><sup>15</sup>&nbsp;On the von Hann function, see: Blackman, Tukey (1958, <em>200&ndash;201</em>).</p>\r\n<p><sup>16</sup> See Alex Nadzharov&rsquo;s website: <a href=\"http://alexnadzharov.com/\">http://alexnadzharov.com/</a>.</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<h1><em><strong>References</strong></em></h1>\r\n<p><strong>1. Arfib, Daniel. </strong>Digital Synthesis of Complex Spectra by Means of Multiplication of&nbsp;Nonlinear Distorted Sine Waves. Proceedings of the 1978 International Computer Music Conference (ICMC), 1978. <a href=\"https://quod.lib.umich.edu/i/icmc/bbp2372.1978.009/1\">https://quod.lib.umich.edu/i/icmc/bbp2372.1978.009/1</a>.<br />&mdash;&mdash;&mdash;. Digital Synthesis of Complex Spectra by Means of Multiplication of Nonlinear Distorted Sine Waves. Journal of the Audio Engineering Society 27, no. 10 (1979): <br /><em>757&ndash;768</em>. <a href=\"https://aes2.org/publications/elibrary-page/?id=3178\">https://aes2.org/publications/elibrary-page/?id=3178</a>.</p>\r\n<p><strong>2. Blackman, R. B., Tukey, J. W. </strong>The Measurement of Power Spectra from&nbsp;the&nbsp;Point of&nbsp;View of Communications Engineering &ndash; Part I. Bell System Technical Journal 37, no. 1 (January 1958): <em>185&ndash;282</em>. <a href=\"https://archive.org/details/bstj37-1-185/page/n15/mode/2up\">https://archive.org/details/bstj37-1-185/page/n15/mode/2up</a>.</p>\r\n<p><strong>3. Le Brun, Marc. </strong>Digital Waveshaping Synthesis. Journal of the Audio Engineering Society 27, no. 4 (1979): <em>250&ndash;266</em>.</p>\r\n<p><strong>4. Roads, Curtis. </strong>Computer Music Tutorial. Cambridge, MA: MIT Press, 1996. 
<a href=\"https://mitpress.mit.edu/9780262680820/the-computer-music-tutorial/\">https://mitpress.mit.edu/9780262680820/the-computer-music-tutorial/</a>.</p>\r\n<p><strong>5. Roads, Curtis. </strong>Microsound<em>.</em> Cambridge, MA: MIT Press, 2001. <a href=\"https://mitpress.mit.edu/9780262681544/microsound/\">https://mitpress.mit.edu/9780262681544/microsound/</a>.</p>\r\n<p><strong>6. Stockhausen, Karlheinz. </strong>&hellip;how time passes&hellip; Die Reihe 3 (1957): <em>10&ndash;40</em>. ISSN&nbsp;0486-3267.</p>\r\n<p><strong>7. Xenakis, Iannis. </strong>Formalized Music: Thought and Mathematics in Composition<em>.</em> Edited by Sharon Kanach. Hillsdale, NY: Pendragon Press, 1992. <br /><a href=\"https://www.pendragonpress.com/formalized-music.html\">https://www.pendragonpress.com/formalized-music.html</a>.</p>",
        "topics": [
            {
                "id": 2679,
                "name": "gen",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2511,
                "name": "granshaper",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2676,
                "name": "granular_synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2678,
                "name": "granular_vocoding",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2677,
                "name": "granular_waveshaping",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2680,
                "name": "how_time_passes",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2513,
                "name": "morphing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2514,
                "name": "vocoding",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2512,
                "name": "waveshaping",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4600,
            "forum_user": {
                "id": 4597,
                "user": 4600,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Nikolay_Khrust_croped.By_Alexander_Petrovski.jpg",
                "avatar_url": "/media/cache/e3/4e/e34e0ec430b7df66547910e040ee6481.jpg",
                "biography": "Nikolai Khrust (1982, Moscow, Russia — St.-Chamond, France): \ncomposer, sound artist, computer music designer, Ph.D. in music studies, former associate professor at the Moscow Conservatory (until 2022).\n \nKhrust graduated from the Moscow Conservatory and its postgraduate classes as a composer (prof. V. Tarnopolski). His creative portfolio spans instrumental and vocal compositions, electroacoustic works, sound installations, multimedia projects, and sound design. He has received numerous accolades, including the Moscow Art Prize (2021, the first time the award was granted for a musical composition).\n\nKhrust’s works have been performed throughout Europe at major festivals, including the Venice Biennale, Darmstadt Summer Courses, and ISCM World Music Days, and by such ensembles as MCME, Studio for New Music Moscow, and Ensemble Aleph. His residencies include GRAME (Lyon) and CIRM (Nice). His scholarly work covers extended techniques, musical phenomenology, and multimedia composition; his theory is taught in several Russian high schools. \n\nKhrust founded the Octopus and King Bee ensembles. As a performer, Khrust has collaborated with noted conductors such as T. Currentzis and V. Jurowski.",
                "date_modified": "2025-09-07T13:18:54.359024+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "NikolayKhrust",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "granshaper-synth-new-strategies-of-granular-synthesis-and-derivative-types-of-sound-synthesis-granular-vocoder-granular-waveshaping-and-granular-shape-morphing",
        "pk": 3307,
        "published": true,
        "publish_date": "2025-02-25T15:05:06+01:00"
    },
    {
        "title": "The Manifesto of New-Art I",
        "description": "This is a Manifesto of a new Kind of Art and Music.\nWe will try to explore the whole Power of the Human Intellect through Art, Music, Performance, and Poetry.\nSo when you have fun exploring the remaining 90% of your Intellect, enjoy it and write a short E-Mail to THLKunst@web.de",
        "content": "<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/b8e011a618f9fac2c331eeae1d09b42f.jpg\" alt=\"\" width=\"161\" height=\"175\" /> The Manifesto of&nbsp;New Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who we are:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We are Artists and Computer Programmers.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We are Computer Nerds to the Society of Artists &ndash; and we are hobby Artists to the Guild of Computer Nerds. So you might as well call us Idiots who make Noises and Screens which look and sound as if the Laptop had crashed.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Computer-Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What Wikipedia tells: <a href=\"https://en.wikipedia.org/wiki/Computer_music\">https://en.wikipedia.org/wiki/Computer_music</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s computer-aided Composition: <a href=\"https://web.archive.org/web/20070927001256/http://www.flexatone.net/docs/nlcaacs.pdf\">https://web.archive.org/web/20070927001256/http://www.flexatone.net/docs/nlcaacs.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Csound Book, an Entry to it: <a href=\"https://web.archive.org/web/20100102064621/http://csounds.com/shop/csound-book\">https://web.archive.org/web/20100102064621/http://csounds.com/shop/csound-book</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Electronic Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What Wikipedia tells:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Electronic_music\">https://en.wikipedia.org/wiki/Electronic_music</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The History of Computer Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://120years.net/\">http://120years.net/</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A History to dig deeper into:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.personal.psu.edu/meb26/INART55/timeline.html\">http://www.personal.psu.edu/meb26/INART55/timeline.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Computer-Aided Composition of Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Chronology of Computer-Aided Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.doornbusch.net/chronology/\">http://www.doornbusch.net/chronology/</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Dive into the History of Computational Music Systems:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.academia.edu/34234640/MuSA_2017_Conference_-_Early_Computer_Music_Experiments_in_Australia_England_and_the_USA\">https://www.academia.edu/34234640/MuSA_2017_Conference_-_Early_Computer_Music_Experiments_in_Australia_England_and_the_USA</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Challenge to Program a Computer for Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://soundlab.cs.princeton.edu/publications/on-the-fly_nime2004.pdf\">https://soundlab.cs.princeton.edu/publications/on-the-fly_nime2004.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://commons.wikimedia.org/wiki/File:Yamaha_CX5M_Music_Computer_set,_MIM_Brussels.jpg\">https://commons.wikimedia.org/wiki/File:Yamaha_CX5M_Music_Computer_set,_MIM_Brussels.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/3223877378577083288f14915f33115a.jpg\" alt=\"\" width=\"344\" height=\"274\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">This you may believe, but what is Art? Art is the Play with Possibility. And it is most often this Play: a Play at the Boundaries of the Sense of Order, of the Systematic, and of the Understandable. So the first Consumer of an Art Act will most often say there is no Sense in it; he will find no Sense in the System of an Art Work. And another Consumer will say this same System is the fascinating Aspect of the Act. So you should try to discover our Art for yourself, because only you can make Your own Experience. And at the End of our Manifesto you will know why. Why only You have the Right to make Your Experience.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">To try a Definition of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thoughtco.com/what-is-the-definition-of-art-182707\">https://www.thoughtco.com/what-is-the-definition-of-art-182707</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Second Definition of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.theartist.me/art/what-is-art\">https://www.theartist.me/art/what-is-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Set of Ideas:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.brainpickings.org/2012/06/22/what-is-art\">https://www.brainpickings.org/2012/06/22/what-is-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Play of Art with Possibility:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Splunk on the Art of the Possible in modern Computing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.splunk.com/en_us/solutions/art-of-the-possible.html\">https://www.splunk.com/en_us/solutions/art-of-the-possible.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art and the Possibility of Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.verywellmind.com/what-is-art-therapy-2795755\">https://www.verywellmind.com/what-is-art-therapy-2795755</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Way of Rethinking through Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.bbc.com/future/article/20190521-how-art-and-culture-can-help-us-rethink-time\">https://www.bbc.com/future/article/20190521-how-art-and-culture-can-help-us-rethink-time</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Play of Art with Logic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Logic is an Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/Why-is-logic-considered-to-be-an-art\">https://www.quora.com/Why-is-logic-considered-to-be-an-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Does Art have a Logic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/Does-logic-exist-in-arts\">https://www.quora.com/Does-logic-exist-in-arts</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Web Page about Philosophical Logic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.iep.utm.edu/category/s-l-m/logic\">https://www.iep.utm.edu/category/s-l-m/logic</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Is Art understandable:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to begin to Draw:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.artistsnetwork.com/art-mediums/drawing/10-different-drawing-approaches\">https://www.artistsnetwork.com/art-mediums/drawing/10-different-drawing-approaches</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Approaches to Art in School:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://art21.org/for-educators/tools-for-teaching/getting-started-an-introduction-to-teaching-with-contemporary-art/contemporary-approaches-to-teaching\">https://art21.org/for-educators/tools-for-teaching/getting-started-an-introduction-to-teaching-with-contemporary-art/contemporary-approaches-to-teaching</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Approach to Contemporary Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://news.harvard.edu/gazette/story/2001/12/contemporary-approach-to-art\">https://news.harvard.edu/gazette/story/2001/12/contemporary-approach-to-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Chehel_Sotoun_ceiling.jpg\">https://commons.wikimedia.org/wiki/File:Chehel_Sotoun_ceiling.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/78ffd84a1d501d22e58e9decf8e7797c.png\" alt=\"\" width=\"344\" height=\"230\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We are also Artists, but what is Art? Art, we have just said, is the Play with Possibility. So what is Possibility? It is, in our Sense, the Force of logical Structures. And yes, later we will cross, last of all, the Boundary where Logic breaks down. In the following we will tell you about these Boundaries: the Boundaries of your Intellect. 
And with these Boundaries we will begin to Play.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What are logical Structures:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia tells:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Structure_(mathematical_logic)\">https://en.wikipedia.org/wiki/Structure_(mathematical_logic)</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophy of Logical Structures:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://plato.stanford.edu/entries/logic-classical/#4\">https://plato.stanford.edu/entries/logic-classical/#4</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Course in universal Algebra:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.math.uwaterloo.ca/~snburris/htdocs/UALG/univ-algebra2012.pdf\">http://www.math.uwaterloo.ca/~snburris/htdocs/UALG/univ-algebra2012.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Logical Structures in Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Logic in Language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://courses.lumenlearning.com/hostos-edu104/chapter/logic-and-structure-2\">https://courses.lumenlearning.com/hostos-edu104/chapter/logic-and-structure-2</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Logical Structures in minimal Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.worcesterart.org/exhibitions/past/minimalism.html\">https://www.worcesterart.org/exhibitions/past/minimalism.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia on ZERO&rsquo;s Art of Structure:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://en.wikipedia.org/wiki/Zero_(art)\">https://en.wikipedia.org/wiki/Zero_(art)</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Mizar_MathWiki_screenshot.png\">https://commons.wikimedia.org/wiki/File:Mizar_MathWiki_screenshot.png</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/451d5d07d1c68641e25ac5f37d74a2fb.png\" alt=\"\" width=\"344\" height=\"305\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">But we are not a religious Society. We are a loose Network of Artists, Musicians, Composers, Theorists, and even Computer Nerds. And our Goal is to explore these Boundaries of Mind and Logic.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art at the Boundaries of Logic and Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Logic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.pitt.edu/~belnap/nal.pdf\">http://www.pitt.edu/~belnap/nal.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Art of Logic on the Internet:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.artoflogic.com/2020/03\">https://www.artoflogic.com/2020/03</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Logic of Boundaries:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://iconicmath.com/logic/boundary\">http://iconicmath.com/logic/boundary</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\">---</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/67117ef800f42962bcd716b223427833.png\" alt=\"\" width=\"77\" height=\"68\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">What are our Idols:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">First, just the three Questions with which all ancient Philosophy begins:</p>\n<ol>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Question of History: where we came from.</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Question of what is Being and what is not Being.</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And the Question of the Future, and of the Sense of it all.</p>\n</li>\n</ol>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The basic Question of Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Introduction to Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.roangelo.net/logwitt/first-question-philosophy.html\">https://www.roangelo.net/logwitt/first-question-philosophy.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Questions in a Philosophical View:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://plato.stanford.edu/entries/questions/#QueAnsPre\">https://plato.stanford.edu/entries/questions/#QueAnsPre</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophical Questions:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://conversationstartersworld.com/philosophical-questions\">https://conversationstartersworld.com/philosophical-questions</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Where is the Beginning of it all:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Introduction to the Book of Genesis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.learnreligions.com/book-of-genesis-701143\">https://www.learnreligions.com/book-of-genesis-701143</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Web Page for the whole (astronomical) Space:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://courses.lumenlearning.com/astronomy\">https://courses.lumenlearning.com/astronomy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Introduction to the Omega-Point Theory:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://micahredding.com/blog/omega-point-theory\">http://micahredding.com/blog/omega-point-theory</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What is Being and what is Not:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Introduction to Metaphysics:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://academyofideas.com/2013/06/introduction-to-metaphysics\">https://academyofideas.com/2013/06/introduction-to-metaphysics</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">WhatIsMetaphysics.com:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://whatismetaphysics.com/\">http://whatismetaphysics.com</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Aristotle&rsquo;s Metaphysics:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://archive.org/stream/aristotelesmeta00arisgoog/aristotelesmeta00arisgoog_djvu.txt\">https://archive.org/stream/aristotelesmeta00arisgoog/aristotelesmeta00arisgoog_djvu.txt</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the Sense of it all:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Albert Einstein&rsquo;s View of the World:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://genius.com/Albert-einstein-the-world-as-i-see-it-annotated\">https://genius.com/Albert-einstein-the-world-as-i-see-it-annotated</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Give the World a Sense:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://assets.press.princeton.edu/chapters/s9206.pdf\">http://assets.press.princeton.edu/chapters/s9206.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A deeper Look at the Omega-Point Theory:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://psy-minds.com/omega-point-theory\">https://psy-minds.com/omega-point-theory</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/8c0a88d3d960793464599671bcd685f5.jpg\" alt=\"\" width=\"344\" height=\"258\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So let&rsquo;s attempt some Answers.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">So we begin with the first Question &ndash; the Question about History.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">So what are our Roots &ndash; who are our Predecessors:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We have three Predecessors:</p>\n<ol>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Karlheinz Stockhausen.</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Iannis Xenakis.</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">John Cage.</p>\n</li>\n</ol>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki\">https://commons.wikimedia.org/wiki</a><a href=\"https://commons.wikimedia.org/wiki/File:Josef_Tal_at_the_Electronic_Music_Studio.jpg\"> File:Josef_Tal_at_the_Electronic_Music_Studio.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/b0e81640da55727483c43787f72cadd8.jpg\" alt=\"\" width=\"344\" height=\"228\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 
0cm; line-height: 100%;\">Stockhausen:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Like Karlheinz Stockhausen, we use Electronics to do our thing. But on a deeper Look we are both more specific than Electronics and wider than Electronics is in itself. So you can say our Medium is the Transcendence of Electronics &ndash; which is, in other words, the Idea of the Computer.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Live Electronic Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Studio for Electroacoustic Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.adk.de/en/academy/studio-for-electroacoustic-music/index.htm\">https://www.adk.de/en/academy/studio-for-electroacoustic-music/index.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Example of a live electronic Music Performance:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://econtact.ca/10_4/bernal_pais_endphase.html\">https://econtact.ca/10_4/bernal_pais_endphase.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Reflections on interactive Music Performance:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://econtact.ca/10_4/lindborg_interactivity.html\">https://econtact.ca/10_4/lindborg_interactivity.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Compositions and their Techniques by Karlheinz Stockhausen:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Listen to his Works:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.allmusic.com/artist/karlheinz-stockhausen-mn0000854925/compositions\">https://www.allmusic.com/artist/karlheinz-stockhausen-mn0000854925/compositions</a></p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">An Overview of Composition Paradigms by Karlheinz Stockhausen:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/pdf/833609.pdf?casa_token=l5Q4HXllmYgAAAAA:cOgS_ccqRA6Q-3bpgSb-Y66Obstb0B31JZX6JcZHh0vLWZMzvFwL0r_OosgRSmjd75oWHKMxxEEnbdtb8WpA0SGD1gu78qd14_798-l36GJZEvWZVkOf\">https://www.jstor.org/stable/pdf/833609.pdf?casa_token=l5Q4HXllmYgAAAAA:cOgS_ccqRA6Q-3bpgSb-Y66Obstb0B31JZX6JcZHh0vLWZMzvFwL0r_OosgRSmjd75oWHKMxxEEnbdtb8WpA0SGD1gu78qd14_798-l36GJZEvWZVkOf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Music for a House by Karlheinz Stockhausen:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/pdf/3600925.pdf?casa_token=eJpGhmAE3KwAAAAA:EXQpKeTLJhBYj6Q9DlkwWubhwgEusEDe4res1IeFbudA62dLLLrSiHyyDfFKkVSvDz0_iV1Yd7u_ZBchf4yMJbgiH44H68vr89RAHo66xOKPoHS1fiM2\">https://www.jstor.org/stable/pdf/3600925.pdf?casa_token=eJpGhmAE3KwAAAAA:EXQpKeTLJhBYj6Q9DlkwWubhwgEusEDe4res1IeFbudA62dLLLrSiHyyDfFKkVSvDz0_iV1Yd7u_ZBchf4yMJbgiH44H68vr89RAHo66xOKPoHS1fiM2</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Componist_Karlheinz_Stockhausen_tijdens_repetitie_Michaels_Heimkehr_(in_opdracht,_Bestanddeelnr_930-8763.jpg\">https://commons.wikimedia.org/wiki/File:Componist_Karlheinz_Stockhausen_tijdens_repetitie_Michaels_Heimkehr_(in_opdracht,_Bestanddeelnr_930-8763.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/1aa1acf8a49a59e3d728ac5995e1d502.jpg\" alt=\"\" width=\"344\" height=\"469\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">We try to use the computer as our keyboard or our pencil; it is our medium. So even the word computer is imprecise, because of this transcendence. The important point of the computer is to have a thing that can carry out logical operations, and carry them out without the operator having to intervene more than necessary. So a quantum computer, a mechanical system, or even a dull human interpreter could do the work we need. But we should automate as many aspects of a work as possible. Later we will see the reason for this. So we should shift the focus away from human systems and move it to the computer. We should leave it as much of the work as possible.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the Transcendent:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Definition of transcendent:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.merriam-webster.com/dictionary/transcendent\">https://www.merriam-webster.com/dictionary/transcendent</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The psychedelic transcendent:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.psychedelic-library.org/lsdmenu.htm\">http://www.psychedelic-library.org/lsdmenu.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Kant&rsquo;s system of perspectives:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://staffweb.hkbu.edu.hk/ppp/ksp1\">http://staffweb.hkbu.edu.hk/ppp/ksp1</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the Turing Machine:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Introduction to the Turing machine:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/turing-machine/one.html\">https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/turing-machine/one.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A philosophical view of the Turing machine:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://plato.stanford.edu/entries/turing-machine\">https://plato.stanford.edu/entries/turing-machine</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who was Alan Turing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://historysheroes.e2bn.org/hero/whowerethey/91\">http://historysheroes.e2bn.org/hero/whowerethey/91</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The transcendence of the computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What are transcendental functions:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.encyclopediaofmath.org/index.php/Transcendental_function\">https://www.encyclopediaofmath.org/index.php/Transcendental_function</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The computational complexity of transcendental functions:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://cstheory.stackexchange.com/questions/39922/relation-between-transcendental-numbers-and-computational-complexity\">https://cstheory.stackexchange.com/questions/39922/relation-between-transcendental-numbers-and-computational-complexity\</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The computability of transcendental functions:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://math.stackexchange.com/questions/666449/are-transcendental-numbers-computable\">https://math.stackexchange.com/questions/666449/are-transcendental-numbers-computable</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">The Computational Slave:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What the dictionary says about computing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.dictionary.com/browse/computing\">https://www.dictionary.com/browse/computing</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What Wikipedia says about the computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Computer\">https://en.wikipedia.org/wiki/Computer</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Warhol and the Computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://computerhistory.org/blog/warhol-the-computer/?key=warhol-the-computer\">https://computerhistory.org/blog/warhol-the-computer/?key=warhol-the-computer</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Alan_Turing_Aged_16.jpg\">https://commons.wikimedia.org/wiki/File:Alan_Turing_Aged_16.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/379c64833c3576e2cbacd9aec0455239.jpg\" alt=\"\" width=\"344\" height=\"235\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">But why the computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We should do it with computers because they are stupid enough. The computer does exactly what we specify it should do. As an example of an advantage: the computer is not in a labor union, it does not demand large royalties, and it has no star airs. It leaves us the glory and the honor. It simply does what it should do. And as we will see later, this simplicity is one point of the reputation of &ldquo;New Art&rdquo;.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Computer Slave:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The term slave in technology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.computerhope.com/jargon/s/slave.htm\">https://www.computerhope.com/jargon/s/slave.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The power of the computer in architectural design:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://architectureau.com/articles/the-computer-master-or-servant\">https://architectureau.com/articles/the-computer-master-or-servant</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An article on the servitude of the computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://mcs.open.ac.uk/mj665/TechND91.pdf\">http://mcs.open.ac.uk/mj665/TechND91.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who is the artist of a computer painting:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Can computers do art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://web.archive.org/web/20120405081225/http://academic.evergreen.edu/curricular/thelens/docs/digital/vitality.pdf\">https://web.archive.org/web/20120405081225/http://academic.evergreen.edu/curricular/thelens/docs/digital/vitality.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A book about digital culture:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://archive.org/details/digitalculture0000gere/page/n7\">https://archive.org/details/digitalculture0000gere/page/n7</a></p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\">The copyright question:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1647584\">https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1647584</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:SandersAssociates_-_graphic8_-_H-82-0176_Vistagraphic_3000_Graphic_8_Series_8000_Operation_and_Maintenance_Manual_Feb1983_(1919)_(14779640815).jpg\">https://commons.wikimedia.org/wiki/File:SandersAssociates_-_graphic8_-_H-82-0176_Vistagraphic_3000_Graphic_8_Series_8000_Operation_and_Maintenance_Manual_Feb1983_(1919)_(14779640815).jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/e9452d59ae90bec32298df4dcc8b1291.png\" alt=\"\" width=\"344\" height=\"258\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">But the use of computers has more important aspects:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">There is a force that compels you to be formal when thinking through the program of a computer.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">When you look at the first lines of this manifesto you will read about the force of logical structures. All a computer can do is operate on and interpret logical structures. 
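</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An illustrative aside (our own sketch, not part of the manifesto or its sources): what it means for a machine to operate on and interpret a logical structure can be shown in a few lines of Python. The tuple encoding of formulas and the function names here are assumptions of this sketch.</p>

```python
from itertools import product

def eval_formula(f, env):
    """Interpret a logical structure: a variable name, or a nested
    tuple ('not', f), ('and', f, g), ('or', f, g)."""
    if isinstance(f, str):          # a bare variable: look up its value
        return env[f]
    op, *args = f
    if op == 'not':
        return not eval_formula(args[0], env)
    if op == 'and':
        return eval_formula(args[0], env) and eval_formula(args[1], env)
    if op == 'or':
        return eval_formula(args[0], env) or eval_formula(args[1], env)
    raise ValueError(f"unknown operator: {op}")

def truth_table(f, names):
    """Mechanically enumerate every True/False assignment of the variables."""
    return {vals: eval_formula(f, dict(zip(names, vals)))
            for vals in product([False, True], repeat=len(names))}

# (p and not q) or q -- the machine interprets the structure, nothing more.
formula = ('or', ('and', 'p', ('not', 'q')), 'q')
table = truth_table(formula, ['p', 'q'])
```

<p style=\"margin-bottom: 0cm; line-height: 100%;\">Writing even this small evaluator forces exactly the formality described here: the formula must be stated as an explicit structure before the machine can do anything with it.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">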
So to program a computer you are compelled to see the logical structure of your own ideas.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Formal Logic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Introduction to formal logic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/formal-logic\">https://www.britannica.com/topic/formal-logic</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s formal logic in one sentence:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.dictionary.com/browse/formal-logic\">https://www.dictionary.com/browse/formal-logic</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s mathematical logic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.encyclopediaofmath.org/index.php/Mathematical_logic\">https://www.encyclopediaofmath.org/index.php/Mathematical_logic</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s a Programming Language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the specification of a programming language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Programming_language_specification\">https://en.wikipedia.org/wiki/Programming_language_specification</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An article on specifying such a language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.knosof.co.uk/vulnerabilities/langconform.pdf\">http://www.knosof.co.uk/vulnerabilities/langconform.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A specification of Lisp, the language behind OpenMusic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"http://www.lispworks.com/documentation\">http://www.lispworks.com/documentation</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s theoretical Informatics:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s theoretical Computer Science:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.computersciencedegreehub.com/faq/what-is-theoretical-computer-science\">https://www.computersciencedegreehub.com/faq/what-is-theoretical-computer-science</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Study it at MIT:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://math.mit.edu/research/applied/comp-science-theory.php\">https://math.mit.edu/research/applied/comp-science-theory.php</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Technopedia.com says:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.techopedia.com/definition/32863/theoretical-computer-science\">https://www.techopedia.com/definition/32863/theoretical-computer-science</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Logical_computations,_i.e._tasks,_partially_define_phenotypes,_and_phenotypic_matching_leads_to_ecological_interactions.svg\">https://commons.wikimedia.org/wiki/File:Logical_computations,_i.e._tasks,_partially_define_phenotypes,_and_phenotypic_matching_leads_to_ecological_interactions.svg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/6151242cabb1ebf5fb21e0de85ff8245.jpg\" alt=\"\" width=\"344\" height=\"275\" /></p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">And you must clear up your thinking. This can be frustrating. It will be frustrating when you are unable to see your own face in the mirror. So we could say that programming a computer is a first step in the therapy of your mind. In the following we will see what that therapy is. But I promise: it is the future of psychoanalysis. You will see &hellip; and say that every child should learn two things at school:</p>\n<ol>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophy &ndash; to begin to think for yourself.</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">To program a computer &ndash; to understand the logical structure of thinking, and to look at your own thinking in the mirror.</p>\n</li>\n</ol>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What is it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.philosophybasics.com/general_whatis.html\">https://www.philosophybasics.com/general_whatis.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A short introduction to philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://philosophy.fsu.edu/undergraduate-study/why-philosophy/What-is-Philosophy\">https://philosophy.fsu.edu/undergraduate-study/why-philosophy/What-is-Philosophy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The philosophy of computer science:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://plato.stanford.edu/entries/computer-science\">https://plato.stanford.edu/entries/computer-science</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\">What&rsquo;s the Art to Program Computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to learn Coding:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://lifehacker.com/top-10-ways-to-teach-yourself-to-code-1684250889\">https://lifehacker.com/top-10-ways-to-teach-yourself-to-code-1684250889</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to learn Coding II:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://hackr.io/blog/how-to-learn-programming\">https://hackr.io/blog/how-to-learn-programming</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Demoscene &ndash; The Culture of Coding:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://demoscene-the-art-of-coding.net/\">http://demoscene-the-art-of-coding.net</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the Challenge of AI ( aka artificial Intelligence ):</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Short Introduction to AI:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://becominghuman.ai/introduction-to-artificial-intelligence-5fba0148ec99\">https://becominghuman.ai/introduction-to-artificial-intelligence-5fba0148ec99</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Git-Hub Course for it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://materiaalit.github.io/intro-to-ai\">https://materiaalit.github.io/intro-to-ai</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">AI and Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://medium.com/design-ibm/the-role-of-art-in-ai-31033ad7c54e\">https://medium.com/design-ibm/the-role-of-art-in-ai-31033ad7c54e</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://commons.wikimedia.org/wiki/File:Artificial_Neural_Network_with_Chip.jpg\">https://commons.wikimedia.org/wiki/File:Artificial_Neural_Network_with_Chip.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/9fbd6a570a3f47e195a47f9324a46be0.jpg\" alt=\"\" width=\"344\" height=\"347\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">Like Iannis Xenakis</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:XenakisMDaniel_crop.jpg\">https://commons.wikimedia.org/wiki/File:XenakisMDaniel_crop.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Iannis Xenakis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who is he:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://iannis-xenakis.org/xen/bio/biography.html\">https://iannis-xenakis.org/xen/bio/biography.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Iannis Xenakis on Spotify:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://open.spotify.com/artist/399s62PfwKlLnrLvBjWFYB\">https://open.spotify.com/artist/399s62PfwKlLnrLvBjWFYB</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An analysis of an example work:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.researchgate.net/publication/225618165_Iannis_XenakisArchitect_of_Light_and_Sound\">https://www.researchgate.net/publication/225618165_Iannis_XenakisArchitect_of_Light_and_Sound</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">we are mathematics nerds.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">But what else is mathematics than the application of philosophical logic? ... An application taken to the point where it can be understood by machines such as the computer. And what else is the essence of informatics than to build such machines and structures, and to apply them to the practical problems of life and technology.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Math and Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A short question:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://philosophy.stackexchange.com/questions/1519/how-should-we-characterize-the-relationship-between-mathematics-and-philosophy-o\">https://philosophy.stackexchange.com/questions/1519/how-should-we-characterize-the-relationship-between-mathematics-and-philosophy-o</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And a big answer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/How-can-you-relate-Philosophy-and-Mathematics\">https://www.quora.com/How-can-you-relate-Philosophy-and-Mathematics</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Philosophy of Math:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://plato.stanford.edu/entries/philosophy-mathematics/\">https://plato.stanford.edu/entries/philosophy-mathematics/</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-----</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/cbdcff7dba3b55e814e5d7e10a9b5c48.png\" alt=\"\" width=\"344\" height=\"229\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An answer to this relation:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/What-is-the-relationship-between-mathematics-and-computer-science\">https://www.quora.com/What-is-the-relationship-between-mathematics-and-computer-science</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why math and the computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.houseofbots.com/news-detail/3864-1-whats-the-main-importance-of-mathematics-in-computer-science\">https://www.houseofbots.com/news-detail/3864-1-whats-the-main-importance-of-mathematics-in-computer-science</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The roles of mathematics in computer science:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/259669422_The_Roles_of_Mathematics_in_Computer_Science\">https://www.researchgate.net/publication/259669422_The_Roles_of_Mathematics_in_Computer_Science</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Wi-Fi_challenge_Pi_equation_in_microMathematics_Plus_2.15.6_on_Android_2.3.png\">https://commons.wikimedia.org/wiki/File:Wi-Fi_challenge_Pi_equation_in_microMathematics_Plus_2.15.6_on_Android_2.3.png</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img 
src=\"/media/uploads/user/f9fe41131b861842a42c140470bbe89b.png\" alt=\"\" width=\"344\" height=\"297\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">And what&rsquo;s the connection between philosophy and its logic? &hellip; Clearly, it is the relation between a set of elements and the relations of those elements. These are the structures. So &ndash; look above &ndash; this is the second point you should learn at school. You should learn it to be a proper human being. To be the human who was chosen by God as the keeper of his creation.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s philosophical logic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophy and logic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/How-is-philosophy-related-to-logic\">https://www.quora.com/How-is-philosophy-related-to-logic</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An answer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://philosophy.stackexchange.com/questions/16013/what-would-be-the-relation-between-logic-and-philosophy\">https://philosophy.stackexchange.com/questions/16013/what-would-be-the-relation-between-logic-and-philosophy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And another answer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://philosophy.stackexchange.com/questions/36549/what-distinction-is-there-between-logic-philosophy-of-logic-and-philosophical-l\">https://philosophy.stackexchange.com/questions/36549/what-distinction-is-there-between-logic-philosophy-of-logic-and-philosophical-l</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://commons.wikimedia.org/wiki/File:Logic_Fleuron_N019131-1.png\">https://commons.wikimedia.org/wiki/File:Logic_Fleuron_N019131-1.png</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/9b7d03b29d3899318587d95578f4fd06.jpg\" alt=\"\" width=\"344\" height=\"267\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">But what is the relation between all this and art &ndash; the connection between this and music as a part of art? So now let us be more precise:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art is the concretion of a philosophical idea. And why this &ndash; we will now see.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">This idea is simpler to demonstrate with modern art. But at every time the function of art was to tell or to explain a philosophical idea.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Function of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A short answer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thoughtco.com/what-are-the-functions-of-art-182414\">https://www.thoughtco.com/what-are-the-functions-of-art-182414</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A second short answer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/What-are-the-functions-and-uses-of-art\">https://www.quora.com/What-are-the-functions-and-uses-of-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s art on Wikipedia:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Art\">https://en.wikipedia.org/wiki/Art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Philosophy of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophy of Art.com:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.philosophy-of-art.com/\">https://www.philosophy-of-art.com</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An essay on aesthetics:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.iep.utm.edu/aestheti\">https://www.iep.utm.edu/aestheti</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Problems of art today:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.ditext.com/anka/beardsley/post.html\">http://www.ditext.com/anka/beardsley/post.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophy in Art:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">Abstract ideas in art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.pinterest.de/abstractoons/form-line-color/\">https://www.pinterest.de/abstractoons/form-line-color/</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s functional art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.artspace.com/magazine/art_101/art_market/functional_art-51024\">https://www.artspace.com/magazine/art_101/art_market/functional_art-51024</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Propaganda in art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.bbc.com/culture/story/20130703-can-propaganda-be-great-art\">http://www.bbc.com/culture/story/20130703-can-propaganda-be-great-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:%22The_School_of_Athens%22_by_Raffaello_Sanzio_da_Urbino.jpg\">https://commons.wikimedia.org/wiki/File:%22The_School_of_Athens%22_by_Raffaello_Sanzio_da_Urbino.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/66e93cf049fa0007ff57fdbccc969306.jpg\" alt=\"\" width=\"344\" height=\"229\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">Look at the picture of Amadeus Mozart. It does not only show the face of Mozart; more than that, it shows the genius of Mozart. It tells us about his genius through the design of the picture. Design here is the elevation of a practical entity by improving its functionality to a maximum. Look at a painting by Kandinsky and you are compelled to see the relations of the forms to one another. You either see this system of relations or you see nothing. You are the first or the second kind of viewer.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">This new kind of act of arrangement is the new experience of our modern days. It is the experience of logic. It is the experience that philosophy is the meat around the bones of logic, and that logic can be the motive of an arrangement in painting.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Design:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A definition of it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.strate.education/gallery/news/design-definition\">https://www.strate.education/gallery/news/design-definition</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Google-Design:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://medium.com/google-design/the-meaning-of-design-44f1a82129a8\">https://medium.com/google-design/the-meaning-of-design-44f1a82129a8</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The principles of design:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jidp.or.jp/en/about/firsttime/whatsdesign\">https://www.jidp.or.jp/en/about/firsttime/whatsdesign</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who is Kandinsky:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who is he:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.biography.com/artist/wassily-kandinsky\">https://www.biography.com/artist/wassily-kandinsky</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Examples of Kandinsky&rsquo;s art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.wassilykandinsky.net/work-50.php\">https://www.wassilykandinsky.net/work-50.php</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://archive.org/stream/pointlinetoplane00kand/pointlinetoplane00kand_djvu.txt\">https://archive.org/stream/pointlinetoplane00kand/pointlinetoplane00kand_djvu.txt</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Logic in contemporary Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Example:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.mocak.pl/logical-emotion-contemporary-art-from-japan\">https://en.mocak.pl/logical-emotion-contemporary-art-from-japan</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Article on Logic in Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://philarchive.org/archive/ZAAHTU\">https://philarchive.org/archive/ZAAHTU</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Structures by ZERO:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://briefeanfraublog.de/wp-content/uploads/2017/10/2013_the_sun_is_zero_poerschmann_dirk.pdf\">http://briefeanfraublog.de/wp-content/uploads/2017/10/2013_the_sun_is_zero_poerschmann_dirk.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Triangular_structures_at_Texas_A%26M_University_in_Qatar.jpg\">https://commons.wikimedia.org/wiki/File:Triangular_structures_at_Texas_A%26M_University_in_Qatar.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/e0af8d804a2ff3d25f98bacfcb605e37.jpg\" alt=\"\" 
width=\"344\" height=\"551\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So we have to explain that art is the process of transferring a philosophical idea into something concrete. But we can go a step further. We can show that every art is a process of execution of a philosophical idea: an execution of this idea through the machinery of so-called art, and through the technique and the theory of the chosen medium of art.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Muses:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Muses as Motive of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://ancientrome.ru/art/artworken/result.htm?alt=Muses\">http://ancientrome.ru/art/artworken/result.htm?alt=Muses</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">More Muses:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://iconographic.warburg.sas.ac.uk/vpc/VPC_search/subcats.php?cat_1=5&amp;cat_2=115\">https://iconographic.warburg.sas.ac.uk/vpc/VPC_search/subcats.php?cat_1=5&amp;cat_2=115</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Theory of Muses:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.theoi.com/Text/DiodorusSiculus4A.html#7\">https://www.theoi.com/Text/DiodorusSiculus4A.html#7</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Principle of Functional Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Paradigms in Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.researchgate.net/publication/270314951_On_Paradigms_and_Revolutions_in_Science_and_Art_The_Challenge_of_Interpretation\">https://www.researchgate.net/publication/270314951_On_Paradigms_and_Revolutions_in_Science_and_Art_The_Challenge_of_Interpretation</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Functional Art Specification:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://scottmanning.com/content/functional-design-specification/\">https://scottmanning.com/content/functional-design-specification/</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What are Paradigms in Computing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/241111987_Programming_Paradigms_for_Dummies_What_Every_Programmer_Should_Know\">https://www.researchgate.net/publication/241111987_Programming_Paradigms_for_Dummies_What_Every_Programmer_Should_Know</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Principle of Concept Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Concept Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.creativebloq.com/career/what-concept-art-11121155\">https://www.creativebloq.com/career/what-concept-art-11121155</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Studiopigeon.com:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.studiopigeon.com/blog/what-is-concept-art\">https://www.studiopigeon.com/blog/what-is-concept-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:King_Arthur_II_concept_art_4.jpg\">https://commons.wikimedia.org/wiki/File:King_Arthur_II_concept_art_4.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/6f15fb9669d947192d79a64d87c2f893.jpg\" alt=\"\" width=\"344\" height=\"138\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So it is even with Music. It&rsquo;s like other Arts. So lets show it by Music.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Music is the Process to translate a Idea through Harmony and Melody. It is the translation of a Idea from the higher Speech ( the ultimate transcendental Speech ) of Philosophy, down in the Speech of Music. Also from the Speech of the Ideas down in the Speech of Emotion. From the ideas like in the meaning of Socrates down to the Material of Music. To a Structure of Event&rsquo;s in Time.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Music as the Language of Emotion:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Article of it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/257431972_Music_The_language_of_emotion\">https://www.researchgate.net/publication/257431972_Music_The_language_of_emotion</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Music Meaning and Emotion:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/1559088?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/1559088?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Music as a Universal Language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.psychologytoday.com/us/blog/talking-apes/201507/is-music-universal-language\">https://www.psychologytoday.com/us/blog/talking-apes/201507/is-music-universal-language</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Concept of Harmony:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Short Introduction to Harmony:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://study.com/academy/lesson/what-is-harmony-in-music-definition-theory.html\">https://study.com/academy/lesson/what-is-harmony-in-music-definition-theory.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A more Complex Article about it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.liveabout.com/harmony-definition-2701631\">https://www.liveabout.com/harmony-definition-2701631</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Dictionary Entry about Harmony:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thefreedictionary.com/Harmony+(music)\">https://www.thefreedictionary.com/Harmony+(music)</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Concept of Melody:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Blog Entry on Melody:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://blog.landr.com/melody\">https://blog.landr.com/melody</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A more Complex Article about Melody:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.liveabout.com/melody-definition-2701673\">https://www.liveabout.com/melody-definition-2701673</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Music Wikipedia:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.musipedia.org/\">https://www.musipedia.org</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Concept of Idea and Material by Socrates:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Ideas of Socrates:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://academyofideas.com/2013/04/the-ideas-of-socrates\">https://academyofideas.com/2013/04/the-ideas-of-socrates</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Theory of Forms I:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Theory_of_forms\">https://en.wikipedia.org/wiki/Theory_of_forms</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Theory of Forms II:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://faculty.washington.edu/smcohen/320/thforms.htm\">http://faculty.washington.edu/smcohen/320/thforms.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:5_part_4_voice_sectional_harmony_in_close.jpg\">https://commons.wikimedia.org/wiki/File:5_part_4_voice_sectional_harmony_in_close.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/fe75223bfad0c119cea0283cb0cf6928.jpg\" alt=\"\" width=\"344\" height=\"275\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So one thing should be clear. There can be no efficiency as long as the artist concentrates on the artwork as it is. 
As it exists in its uniqueness and singularity in time and space.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And as it exists in the semantic relations of the circumstances of its being. We have instead to think according to the principle of our time. We have to think in the principles of the so-called post-industrial society. The main principle is the principle of automation.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Post-Industrial Society:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Definition of it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thoughtco.com/post-industrial-society-3026457\">https://www.thoughtco.com/post-industrial-society-3026457</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Shortened Definition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.encyclopedia.com/social-sciences-and-law/sociology-and-social-reform/sociology-general-terms-and-concepts/post\">https://www.encyclopedia.com/social-sciences-and-law/sociology-and-social-reform/sociology-general-terms-and-concepts/post</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">National Affairs:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://nationalaffairs.com/storage/app/uploads/public/58e/1a4/a2b/58e1a4a2b88ce619080580.pdf\">https://nationalaffairs.com/storage/app/uploads/public/58e/1a4/a2b/58e1a4a2b88ce619080580.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Factory of Andy Warhol:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art like Warhol:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.etsy.com/market/andy_warhol_fabric\">https://www.etsy.com/market/andy_warhol_fabric</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Museum of Andy Warhol:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.warhol.org/event/the-factory\">https://www.warhol.org/event/the-factory</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Factory as a Mecca:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.biography.com/news/andy-warhol-and-the-factory-20750995\">https://www.biography.com/news/andy-warhol-and-the-factory-20750995</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art in Time and Space:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Short Article on it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0099019&amp;type=printable\">https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0099019&amp;type=printable</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Ancient Theory of Art in Time and Space:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://santitafarella.wordpress.com/2012/12/16/time-and-space-or-poetry-and-art\">https://santitafarella.wordpress.com/2012/12/16/time-and-space-or-poetry-and-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Longer Article on it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.janerendell.co.uk/wp-content/uploads/2013/02/SpacePlaceSite-Critical-Spatial-Practice-prepublication-PDF.pdf\">http://www.janerendell.co.uk/wp-content/uploads/2013/02/SpacePlaceSite-Critical-Spatial-Practice-prepublication-PDF.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://commons.wikimedia.org/wiki/File:House_of_Industry_712-718_Catharine_Street_Philadelphia_PA.jpg\">https://commons.wikimedia.org/wiki/File:House_of_Industry_712-718_Catharine_Street_Philadelphia_PA.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/dd3844df23025bdc086b0c70b5f5f6e3.jpg\" alt=\"\" width=\"344\" height=\"517\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">But which method is to be used to follow this principle &hellip; it is the principle of the program. The artist is thus told to write down his imagination in code: code as the general philosophical idea in the speech of the machine, in the speech of a mechanical machine. 
No longer is the machine the demon it is in the grandiose movie Metropolis.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What is an Algorithm:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia explains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Algorithm\">https://en.wikipedia.org/wiki/Algorithm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Mathematical View on Algorithms:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.encyclopediaofmath.org/index.php/Algorithm\">https://www.encyclopediaofmath.org/index.php/Algorithm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Semantic Relations of the Term:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://curlie.org/Computers/Algorithms\">https://curlie.org/Computers/Algorithms</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Language of the Computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Organization of such Languages:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.cs.umd.edu/class/spring2013/cmsc330/lectures/intro.pdf\">http://www.cs.umd.edu/class/spring2013/cmsc330/lectures/intro.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Natural Language Processing (NLP):</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Natural_language_processing\">https://en.wikipedia.org/wiki/Natural_language_processing</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Model to understand NLP:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC40721/pdf/pnas01500-0075.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC40721/pdf/pnas01500-0075.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the Movie Metropolis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Photos and more:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.imdb.com/title/tt0017136\">https://www.imdb.com/title/tt0017136</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Facts about Metropolis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/Metropolis-film-1927\">https://www.britannica.com/topic/Metropolis-film-1927</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">More:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.rottentomatoes.com/m/1013775_metropolis\">https://www.rottentomatoes.com/m/1013775_metropolis</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Maria_from_metropolis.JPG\">https://commons.wikimedia.org/wiki/File:Maria_from_metropolis.JPG</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/10ca06719e95d995535e624a36cad0f3.jpg\" alt=\"\" width=\"344\" height=\"272\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">The artist is bound to get clear about his intentions. He has to go into himself. 
He has to see his own face in this mirror. He has to show the truth about the philosophical, sociological, and political intentions behind the arrangement of his work.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">So to program a machine is the way to look in the mirror, in a mirror like that of Eulenspiegel. He is thus bound to see the truth, or to fail, when he programs the computer.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophic Intentions of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Intentism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Intentism\">https://en.wikipedia.org/wiki/Intentism</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art and its Interpretation:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.iep.utm.edu/artinter\">https://www.iep.utm.edu/artinter</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Conceptual Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://plato.stanford.edu/entries/conceptual-art\">https://plato.stanford.edu/entries/conceptual-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Sociological Intentions of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Sociological Theory of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://web.mit.edu/allanmc/www/bourdieu3.pdf\">http://web.mit.edu/allanmc/www/bourdieu3.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Second Sociological Theory of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.um.es/ESA/Abstracts/Abst_rn2.htm\">https://www.um.es/ESA/Abstracts/Abst_rn2.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Unesco Perception on Art:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://unesdoc.unesco.org/ark:/48223/pf0000024598\">https://unesdoc.unesco.org/ark:/48223/pf0000024598</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Political Intentions of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Is Art political:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.theeuropean.de/en/dickon-stone/8519-why-art-is-by-definition-political\">https://www.theeuropean.de/en/dickon-stone/8519-why-art-is-by-definition-political</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Theory of political Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://arthistoryteachingresources.org/lessons/art-and-political-commitment\">http://arthistoryteachingresources.org/lessons/art-and-political-commitment</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Explained Theory of political Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199935307.001.0001/oxfordhb-9780199935307-e-13\">https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199935307.001.0001/oxfordhb-9780199935307-e-13</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Map4.jpg\">https://commons.wikimedia.org/wiki/File:Map4.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/5b2d06768587ca054013d6ab11de069c.jpg\" alt=\"\" width=\"344\" height=\"229\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So the artist also concentrates on the general philosophical ideas behind his act. He concentrates on the boundaries of this process. He has to translate his ideas into his work. He is a worker at the boundaries of human speech, at the sophisticated human speech called art. But he is not a translator for us. He is a teacher of the speech of art, down to the design of the acts of everyday life. He has to train the consumer to do this design himself. He has to teach the method of art: the method of art as a tool, a tool for clarifying oneself and one&rsquo;s ideas.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Is Art a Language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A first Answer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/Is-art-a-language\">https://www.quora.com/Is-art-a-language</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Journal of Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/pdf/2023126.pdf\">https://www.jstor.org/stable/pdf/2023126.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art is a Visual Language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/20715892?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/20715892?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Boundaries of Semantics:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Overview of Semantic Boundaries:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.researchgate.net/figure/Syntactic-Semantic-and-Pragmatic-Boundaries-Identified-in-the-Data-and-Strategies-That_tbl5_274644413\">https://www.researchgate.net/figure/Syntactic-Semantic-and-Pragmatic-Boundaries-Identified-in-the-Data-and-Strategies-That_tbl5_274644413</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Boundaries in Syntactic Annotations:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.ilc.cnr.it/EAGLES96/segsasg1/segsasg1.html\">http://www.ilc.cnr.it/EAGLES96/segsasg1/segsasg1.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Theory of Syntactic Boundaries:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://pdfs.semanticscholar.org/583e/136c7db365ef253976e1902f889a9b861a81.pdf\">https://pdfs.semanticscholar.org/583e/136c7db365ef253976e1902f889a9b861a81.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Boundaries of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">From Oxford:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://academic.oup.com/bjaesthetics/article-abstract/30/3/266/141743?redirectedFrom=PDF\">https://academic.oup.com/bjaesthetics/article-abstract/30/3/266/141743?redirectedFrom=PDF</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to push them:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://frieze.com/article/pushing-boundaries-contemporary-art\">https://frieze.com/article/pushing-boundaries-contemporary-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Career built on pushing them:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.nytimes.com/2003/11/30/nyregion/a-career-built-on-exploring-the-boundaries-of-art.html\">https://www.nytimes.com/2003/11/30/nyregion/a-career-built-on-exploring-the-boundaries-of-art.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Australian_Center_of_Contemporary_Art_(ACCA)-_Side_2.JPG\">https://commons.wikimedia.org/wiki/File:Australian_Center_of_Contemporary_Art_(ACCA)-_Side_2.JPG</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/3491d39d832065eb0f26271ddb117e7f.png\" alt=\"\" width=\"344\" height=\"214\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">Now, what are the aspects under which an act can exist? In particular, what are the aspects of a work of art? Let&rsquo;s look at the aspects of unique works. 
These relations are:</p>\n<ol>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The time at which a work is made and consumed</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The place at which a work is made and consumed</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The semantic relations a work has &hellip;</p>\n</li>\n</ol>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Semantics of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Theory of the Semantics of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/427490?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/427490?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Semantic Arts:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.semanticarts.com/\">https://www.semanticarts.com</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Form and Meaning of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://case.edu/artsci/cogs/larcs/documents/Formmeaningandart.pdf\">https://case.edu/artsci/cogs/larcs/documents/Formmeaningandart.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">These are the relations a work has as a sign. In short, in the theory of speech there is a generally accepted statement: in linguistics, every word carries a meaning. It has this meaning by the rules of the game in which it is used. This is the so-called language game. 
So we should no longer deny that works of art are like words.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What are Language Games:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Great Language Game:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://greatlanguagegame.com/\">https://greatlanguagegame.com</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Wikipedia tells:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Language_game_(philosophy)\">https://en.wikipedia.org/wiki/Language_game_(philosophy)</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Language Games by Wittgenstein:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.signosemio.com/wittgenstein/language-games.asp\">http://www.signosemio.com/wittgenstein/language-games.asp</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What is Semantics:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Definition of it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.merriam-webster.com/dictionary/semantic\">https://www.merriam-webster.com/dictionary/semantic</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia tells:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Semantics\">https://en.wikipedia.org/wiki/Semantics</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Great Semantics Archive Net:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.semanticsarchive.net/\">https://www.semanticsarchive.net</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\">--------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/aa9d1e017e6bf46ec35ac0e5d3299a15.png\" alt=\"\" width=\"344\" height=\"229\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So it is important to make everybody prudent for the Circumstance which he gives his Semen. Which he lets a Work have has. Also under which a Work has his Being.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We couldn&rsquo;t not anymore respond only the Artist with this.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">It&rsquo;s would be like a medical practitioner who will be called to remove a constipation. He wouldn&rsquo;t do it by drain water in the Body of his Client. No practitioner will do this. But he gives him a medical drug.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who is responded for the Content in Web 2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Article about User generated Content:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.sciencedirect.com/topics/psychology/user-generated-content\">https://www.sciencedirect.com/topics/psychology/user-generated-content</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Law and web2.0 Conference:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.huygens.es/ebooks/IDTSeries3_WEB2.0.pdf\">http://www.huygens.es/ebooks/IDTSeries3_WEB2.0.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Understanding Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/3426898_Understanding_Web_20\">https://www.researchgate.net/publication/3426898_Understanding_Web_20</a></p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Are the Programmers Respond for there Software:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Whats Questions in the Mind of a Programmer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://blog.lelonek.me/the-art-of-asking-questions-for-developers-cd88351b9e87\">https://blog.lelonek.me/the-art-of-asking-questions-for-developers-cd88351b9e87</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who gets Programmers Answers:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.hongkiat.com/blog/programming-questions-websites\">https://www.hongkiat.com/blog/programming-questions-websites</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Is tech Support important for Programmers:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://softwareengineering.stackexchange.com/questions/35819/should-software-engineers-also-act-as-tech-support\">https://softwareengineering.stackexchange.com/questions/35819/should-software-engineers-also-act-as-tech-support</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why works Medicine by Drugs:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How works Drugs into the Body:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://oklahoman.com/article/5557200/how-do-drugs-work-in-the-body\">https://oklahoman.com/article/5557200/how-do-drugs-work-in-the-body</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Closer Look at it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.verywellhealth.com/how-drugs-work-in-your-body-1124115\">https://www.verywellhealth.com/how-drugs-work-in-your-body-1124115</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Closer Look II:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.livescience.com/45241-medicine-journey-through-body-nigms.html\">https://www.livescience.com/45241-medicine-journey-through-body-nigms.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Web2.0.PNG\">https://commons.wikimedia.org/wiki/File:Web2.0.PNG</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/ac1fd25cf4647e5fcf33253a1004200e.jpg\" alt=\"\" width=\"344\" height=\"468\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So this is the mainly failure of Psychoanalysis. The Analyst can&rsquo;t remove all the Engrams by Hand. He should rather train his Client to do it for his self. So the Client is to train to speech a Meta-Speech of his Mind. 
And has to trow him self out of the pit.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Psychoanalysis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Humor View to Psychoanalysis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.simplypsychology.org/psychoanalysis.html\">https://www.simplypsychology.org/psychoanalysis.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A neutral View:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.psychologytoday.com/us/blog/headshrinkers-guide-the-galaxy/201401/what-is-psychoanalysis\">https://www.psychologytoday.com/us/blog/headshrinkers-guide-the-galaxy/201401/what-is-psychoanalysis</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A web-Page about Psychoanalysis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.freudfile.org/psychoanalysis/index.html\">https://www.freudfile.org/psychoanalysis/index.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s are Meta-Levels in Psychoanalysis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s are Meta-Analysis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.students4bestevidence.net/blog/2016/12/02/meta-analysis-what-why-and-how\">https://www.students4bestevidence.net/blog/2016/12/02/meta-analysis-what-why-and-how</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Level&rsquo;s on Mind at Freud:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.verywellmind.com/the-conscious-and-unconscious-mind-2795946\">https://www.verywellmind.com/the-conscious-and-unconscious-mind-2795946</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Meta-Level Theory at Freud:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/4106762?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/4106762?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s a Meta-Levels of Languages:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Definition of a Meta-Level for Knowledge Interchange:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://logic.stanford.edu/kif/metaknowledge.html\">http://logic.stanford.edu/kif/metaknowledge.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A natural Meta-Language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://englishlive.ef.com/blog/language-lab/metalanguage-will-help-english\">https://englishlive.ef.com/blog/language-lab/metalanguage-will-help-english</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Theory of Levels of Language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.uni-due.de/ELE/LevelsOfLanguage.pdf\">https://www.uni-due.de/ELE/LevelsOfLanguage.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Sigmund_Freud,_by_Max_Halberstadt_(cropped).jpg\">https://commons.wikimedia.org/wiki/File:Sigmund_Freud,_by_Max_Halberstadt_(cropped).jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/b57ec8c99b7cfe3a675bdc47fbe39f80.jpg\" alt=\"\" width=\"344\" height=\"258\" /></p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So what&rsquo;s new the Work of the Artist. He has to build the Theory of the Speech. He has to train his Consumer to use this Speech. He has to develop a System. A System which Boundary&rsquo;s a far enough to Express the Thinking. The Thinking as a Set of Ideas and Feelings must be express in this Speech.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Whats modern Psychoanalysis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Center for modern Psychoanalysis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.cmps.edu/what-is-modern-psychoanalysis\">https://www.cmps.edu/what-is-modern-psychoanalysis</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Difference to Freudian:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://modernpsychoanalyst.com/differences-in-modern-freudian-psychoanalysis\">https://modernpsychoanalyst.com/differences-in-modern-freudian-psychoanalysis</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How dos it work:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/How-does-modern-psychoanalysis-work\">https://www.quora.com/How-does-modern-psychoanalysis-work</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Psychotherapy based on depth psychology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s depth Psychology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.pacifica.edu/about-pacifica/what-is-depth-psychology\">https://www.pacifica.edu/about-pacifica/what-is-depth-psychology</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How get you Benefits:</p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\"><a href=\"https://www.betterhelp.com/advice/psychologists/how-can-depth-psychology-benefit-you\">https://www.betterhelp.com/advice/psychologists/how-can-depth-psychology-benefit-you</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The C.G. Jung Center:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.cgjungcenter.org/clinical-services/what-is-depth-psychology\">https://www.cgjungcenter.org/clinical-services/what-is-depth-psychology</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>",
        "topics": [
            {
                "id": 96,
                "name": "Contemporary",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 390,
                "name": "Manifesto",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 389,
                "name": "New-art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17628,
            "forum_user": {
                "id": 17624,
                "user": 17628,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6389f37aeaee190f92e385b6a9b395f6?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "creco",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-manifesto-of-new-art",
        "pk": 642,
        "published": false,
        "publish_date": "2020-04-25T08:48:17.847411+02:00"
    },
    {
        "title": "The rr Max Package - Jonathan Pitkin",
        "description": "A new set of composition tools for generating sequences of gradually or randomly changing sounds using MIDI messages.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Presented by: Jonathan Pitkin<br /><a href=\"https://forum.ircam.fr/profile/JonathanPITKIN/\">Biography</a></p>\r\n<p>rr is a new Max package consisting of a small collection of patchers designed to facilitate the generation of sequences of MIDI notes and control-change messages characterized by varying degrees of randomization and/or by transitions between defined start and end states.</p>\r\n<p>Its objects can be used to:&nbsp;</p>\r\n<ul>\r\n<li>Quickly and flexibly generate accelerating or decelerating sequences (\"ramps\") of MIDI notes, along with optional control-change messages for simultaneous gradual transitions from one parameter state to another.</li>\r\n<li>Generate SEQUENCES of accelerating or decelerating ramps of MIDI notes and control-change messages, varying their characteristics randomly within defined limits.</li>\r\n<li>Continuously generate MIDI notes and optional control-change messages at random, within defined limits, with options to \"lock\" parameters so that they vary in proportion to one another.</li>\r\n</ul>\r\n<p>Here is a quick <a href=\"https://www.youtube.com/embed/6gcH7r9mMV8?si=O1YJI14TW28e4chl\">introduction/demonstration</a>&nbsp;</p>\r\n<p>You can discover the 
rr package at the 30th-anniversary IRCAM Forum workshops on March 21, 2024.</p>\r\n<p><a href=\"http://www.JPitkin.co.uk\">www.JPitkin.co.uk</a></p>\r\n<p><a href=\"http://www.jpitkin.co.uk/Tools_software.html\">www.jpitkin.co.uk/Tools_software.html</a></p>\r\n<p><img src=\"/media/uploads/jp_side3_-_jonathan_pitkin.jpg\" alt=\"\" width=\"320\" height=\"192\" /></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 428,
                "name": "Algorithmic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 74,
                "name": "Midi",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1735,
                "name": "Randomization",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 88,
                "name": "Rhythm",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 6368,
            "forum_user": {
                "id": 6365,
                "user": 6368,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/JP_side3_small_square.jpeg",
                "avatar_url": "/media/cache/2a/9c/2a9cc25f5314a68a5f9ca48830203669.jpg",
                "biography": "Jonathan Pitkin is a British composer whose music increasingly involves the use of new technology, whether in the production of sound or in the reconfiguration and expansion of familiar instruments, made to behave in unexpected ways which suggest that they may have minds of their own. He works around the edges of popular and classical, performance and installation, and liveness and automation.\n\nJonathan's work has featured at the Huddersfield, Spitalfields and New York City Electroacoustic Music Festivals, the IRCAM Forum Ateliers and the CIME General Assembly. His output includes works for Disklavier, Magnetic Resonator Piano, circular piano, and singing synthesizer, as well as installations, emulations, pedagogical software and composers' tools. His published writings include contributions to the proceedings of NIME and the ICMC, and edited volumes published by SAGE and Routledge.\n\nJonathan teaches Composition and Academic studies at the Royal College of Music, London.",
                "date_modified": "2026-02-03T13:23:21.199106+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "JonathanPITKIN",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-rr-max-package",
        "pk": 2716,
        "published": true,
        "publish_date": "2024-02-08T18:03:26+01:00"
    },
    {
        "title": "Algorithmic Composition Techniques for Ancient Chinese Music Restoration and Reproduction-A Melody Generator Approach by Tiange Zhou & Marco Bidin",
        "description": "This research explores the use of algorithmic composition and deep learning methods to generate melodies based on the traditional East Asian \"Baban\" form. It aims to bridge traditional and modern practices and to promote a deeper understanding of ancient musical heritage through computational techniques.",
        "content": "<p>In contemporary musicology, the interplay between traditional musical forms and modern compositional techniques has garnered increasing interest. This presentation explores algorithmic composition techniques specifically developed to generate melodic elements inspired by the ancient East Asian musical form known as \"Baban.\"</p>\r\n<div>\r\n<p>Through the lens of this research, I highlight the tendency towards oversimplification in representations of East Asian musical traditions, emphasizing a prevalent lack of exposure among contemporary musicians to the rich and diverse heritage of ancient East Asian music. My work seeks to bridge the chasm between traditional and contemporary musical practices, aiming to enhance understanding and appreciation of this particular musical heritage.</p>\r\n<p>Central to this endeavor is a model designed to generate melodic continuations rooted in the \"Baban\" form, accompanied by a comprehensive analysis of pieces created using this model. The implementation of Computer Aided Composition in this context demonstrates its potential to illuminate the complexities of ancient East Asian music, thereby underscoring its cultural significance and educational value. Moreover, the research delves into the integration of Artificial Intelligence technologies, such as deep learning and neural networks, into the study and practice of traditional East Asian music.</p>\r\n<p>By harnessing these advanced computational techniques, this study not only contributes to the creation of innovative compositions grounded in traditional theoretical frameworks but also encourages a reevaluation of how contemporary musicians engage with and reinterpret ancient musical forms. 
Ultimately, this research aspires to cultivate a deeper appreciation for the intricate tapestry of East Asian musical heritage, fostering a dialogue between the past and the present in the realm of musical composition.&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/6ea3fd98cb697b8f05569bbcdf60b9b0.jpg\" /></p>\r\n</div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3507f7f45d14e4f433fc36d8681e5a0e.png\" /></p>",
        "topics": [
            {
                "id": 1758,
                "name": "algorithmic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 670,
                "name": "Deep learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 636,
                "name": "Generative music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 91,
                "name": "Music theory",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 21145,
            "forum_user": {
                "id": 21134,
                "user": 21145,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/tiange_zhou_picture.jpg",
                "avatar_url": "/media/cache/b9/2e/b92e9acb7d31a2c8a504750e31223ddc.jpg",
                "biography": "Dr. Tiange Zhou is the director of the Art & Technology Lab at the\nSchool of Future Design, Beijing Normal University. She is a\ncomposer, interdisciplinary artist, and researcher. She earned\nher Bachelor's degree from the Manhattan School of Music,\nfollowed by a Master's degree from Yale University, and a Ph.D.\nfrom UCSD. Her works have received recognition, including the\nAmerican Filmatic Arts Awards for Best Sound Design in Short\nFilms, First Prize at the Kirkos Kammer International Chamber\nMusic Composition Competition, and a Gold Winner of the\nHermes Creative Awards. She has served as a course lecturer\nand collaborative artist at Yale College, the University of\nCalifornia, San Diego, and the Harvard Chinese Art Media Lab\n(CAM Lab) before relocating to China. Her research has been\npublished by IEEE-ICME, IRCAM FORUM, SIGGRAPH Asia,\nand CRC Press of Taylor & Francis Group.",
                "date_modified": "2025-03-29T05:41:42.124253+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 992,
                        "forum_user": 21134,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "tiangezhou",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 21145,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "algorithmic-composition-techniques-for-ancient-chinese-music-restoration-and-reproduction-a-melody-generator-approach",
        "pk": 3038,
        "published": true,
        "publish_date": "2024-10-20T04:03:21+02:00"
    },
    {
        "title": "Modalys Tutorial No. 9: 3D Magic",
        "description": "Ninth part of my tutorial series on using Modalys and its libraries in Modalisp, OpenMusic, and Max.",
        "content": "<p style=\"text-align: justify;\"><strong>In this tutorial, we explore the possibilities of building 3D objects in Modalys, and finish by tilting one.</strong></p>\r\n<p style=\"text-align: justify;\">Hidden in the Modalys help folder of your installation is an old manuscript called finite elements. It holds the key to building 3D objects in Modalys. In this tutorial we will keep things simple and make an arc, giving it our material of choice, which will turn out to be gold. Although it is impossible to make complex 3D shapes with bumps and the like in Modalys, this is still one of its most powerful features. For example, making a strange 50-meter-diameter gong out of uranium...</p>\r\n<p style=\"text-align: justify;\"><br />We overcame a few documentation hurdles for Modalisp and OpenMusic, and at the very end we looked at the patches that work (and sometimes don't) ;-).</p>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/YE3_cbqLesE\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p style=\"text-align: left;\"><strong>This tutorial was made by Olav Lervik.&nbsp;</strong></p>",
        "topics": [
            {
                "id": 194,
                "name": "3d",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 348,
                "name": "Max externals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 59,
                "name": "Synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n9-3d-magic",
        "pk": 731,
        "published": true,
        "publish_date": "2020-11-10T10:00:00+01:00"
    },
    {
        "title": "Fragmented City - Ashima Pargal",
        "description": "A conceptual narrative sound design project presenting a semiotic perspective on the impact of name changes on the identity of cities.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Presented by:<span>&nbsp;</span>Ashima Pargal<br /><a href=\"https://forum.ircam.fr/profile/ashima/\">Biography</a></p>\r\n<p>Fragmented City is a conceptual, narrative-based sound design project that explores the impact of a city's name changes on its identity.</p>\r\n<p>Every city name carries its own set of meanings, symbols, and connotations that become embedded in and associated with the culture. A name change shifts the origin of these references and severs the ties that bound the city to its history. While it creates space for new stories to emerge, it also leaves behind important parts of intangible cultural heritage, since in most cases the previous names are not always properly documented.</p>\r\n<p>Inspired by the five names by which Chhatrapati Sambhajinagar, a city in India, has been known over the past 11 centuries, and by the inspirations behind their naming, this piece seeks to sketch a poetic experience reflecting on the broader semiotic significance of how names are created and how they transform over time. 
The various influences behind the city's different names, such as a lake, rocky terrain, and monarchs, are tied together by a set of drum beats evoking the change in the sound of the city's heartbeat as it syncs into new chapters.</p>\r\n<p>&nbsp;</p>\r\n<p><em>Ashima is pursuing a master's degree in digital direction at the Royal College of Arts. She is interested in intersectional storytelling and often explores aspects of identity, culture, representation, urban landscapes, light, and semiotics in her projects.</em></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a></strong></p>",
        "topics": [
            {
                "id": 1826,
                "name": " audiovisual",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1909,
                "name": " Conceptual Sound design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1890,
                "name": "Immersive narrative",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1837,
                "name": "ircamforum",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1903,
                "name": "IRCAM Forum Workshops 2024",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1908,
                "name": "Narrative ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1858,
                "name": "rca",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1859,
                "name": "royalcollegeofart",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 55141,
            "forum_user": {
                "id": 55078,
                "user": 55141,
                "first_name": "Ashima",
                "last_name": "Pargal",
                "avatar": "https://forum.ircam.fr/media/avatars/ashima_pargal.jpg",
                "avatar_url": "/media/cache/dc/a3/dca344598c6ddf4f9c6b32d8ac38c955.jpg",
                "biography": "Ashima is pursuing MA Digital Direction from the Royal College of Arts. She is interested in intersectional storytelling and often explores aspects of identity, culture, representation, urban landscapes and semiotics in her projects.",
                "date_modified": "2024-03-19T15:31:45.997801+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ashima",
            "first_name": "Ashima",
            "last_name": "Pargal",
            "bookmarks": []
        },
        "slug": "fragmented-city-ashima-pargal",
        "pk": 2837,
        "published": true,
        "publish_date": "2024-03-17T20:44:43+01:00"
    },
    {
        "title": "Panoramix with Reaper",
        "description": "Help for beginner",
        "content": "<p>Hi alls</p>\n<p>I am a beginner with Panoramix</p>\n<p>My project would be to run Panoramix with Reaper as inputs /outputs&nbsp; and working ambisonics .</p>\n<p>Configuration :Windows 10 + Reaper 6 + Fireface UC with Totalmix</p>\n<p>Just now i succeed only to use the file player as source of sounds</p>\n<p>Panoramix seems not&nbsp; to communicate whith Reaper</p>\n<p>How can i interface both ?</p>\n<p>Is it also possible to open several file players to fetch differents sounds to tracks</p>\n<p>Thanks for help</p>",
        "topics": [],
        "user": {
            "pk": 25641,
            "forum_user": {
                "id": 25614,
                "user": 25641,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/9390cd943be1529de37215dbf771875d?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sicaudjl",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "panoramix-with-reaper",
        "pk": 2185,
        "published": false,
        "publish_date": "2023-04-06T14:55:02.658848+02:00"
    },
    {
        "title": "MDIF - a composer-in-the-Loop MLP by Anders Vinjar",
        "description": "This demo presents an easy-to-use practical composer-in-the-loop machine learning system.\r\n\r\nIt can learn a composer's subjective preferences from a number of musical examples and turns them into immediate, interactive control, as part of a typical composers workflow.\r\n\r\nThe main focus is musical structure at a symbolic level (notes, rhythms, register, texture).",
        "content": "<h5 id=\"➡️-this-presentation-is-part-of-ircam-forum-workshops-paris-engh\"><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></h5>\r\n<p><strong></strong></p>\r\n<p><strong>MDIF - Musical Descriptor Intuition Field</strong></p>\r\n<p>This demo presents an easy-to-use practical composer-in-the-loop machine learning system.<br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2ee65365b4b26290f43b729969774dc2.png\" /></p>\r\n<p>It can learn a composers subjective preferences from a number of musical examples and turns them into immediate, interactive control, as part of a typical composers workflow.</p>\r\n<p>The main focus is musical structure at a symbolic level (notes, rhythms, register, texture).</p>\r\n<ul>\r\n<li><strong>MDIF - example creative workflow </strong></li>\r\n</ul>\r\n<p style=\"padding-left: 30px;\"><strong>A typical composition workflow in OpenMusic using this system:</strong></p>\r\n<ul>\r\n<li>\r\n<ol>\r\n<li>setting up some patch generating variants of a structure</li>\r\n<li>generate a handful of candidates</li>\r\n<li>place them interactively within your personal subjective cognitive space in a GUI,</li>\r\n<li>train the MLP</li>\r\n<li>immediately use the trained model to generate, compare, edit, filter, interpolate new \"similar\" structure</li>\r\n</ol>\r\n</li>\r\n<li><strong>Easy and intuitive control of complex algorithms</strong>\r\n<ul>\r\n<li>What the MLP buys you is a simple and intuitive handle on what otherwise can be complex and chaotic: musically meaningful changes that emerge from awkward, hard-to-steer combinations of parameters, typical for the OM composer. 
<br /><br /><br />Instead of controlling dozens of algorithmic parameters directly, you interact with a learned notion of degrees of subjective similarity between your own samples (or class membership), where \"similar\" means \"similar in a subjective artistic sense\", not by grammars or rules.&nbsp; The manouvering in the trained model can be done interactively using the provided GUI - a 2D 'joystick', a set of sliders or a radar-plot - or searching and filtering in OM, or perhaps by other means.<br /><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/ccd1bba910609497b94c1479d938fe2c.png\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a12a973b645985b51d44eadc2fb2961b.png\" /></li>\r\n</ul>\r\n</li>\r\n<li><em>\"MDIF - Music Descriptor Intuition Field\"</em> - ad-hoc, composer-defined, personal, phenomenological descriptors<br />User-defined ad-hoc subjective descriptors, meaningful in the personal creative context.&nbsp; E.g. <em>\"entropy\", \"complexity\", \"texture\", \"thickness\", \"sharpness\"...</em><br /><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5128cba7f32712efe8fdb91bd9727b3f.png\" /></li>\r\n<li><strong>Architecture</strong>\r\n<ul>\r\n<li>The system is programmed as a portable Python core for GUI/interaction/MLP/training/inference, and some wrappers for OpenMusic in Common Lisp. The result is an interactive and intuitive ML-system that stays small and editable.</li>\r\n<li>The ML-system is general, and can be useful in all sorts of compositional tasks. Probably useful in other domains as well</li>\r\n<li>This demo at the FORUM will show an integration with <em>OpenMusic</em>.</li>\r\n</ul>\r\n</li>\r\n<li><strong>Keywords</strong>:\r\n<ul>\r\n<li>composer-in-the-loop interactive ML, MLP, few-shot/ad-hoc ML training, perceptual distance, classification, OpenMusic, Python/Common Lisp integration, phenomenological descriptors</li>\r\n</ul>\r\n</li>\r\n</ul>",
        "topics": [
            {
                "id": 3462,
                "name": "AI & Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1758,
                "name": "algorithmic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4279,
                "name": "Common Lisp",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 400,
                "name": "Interactive machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4280,
                "name": "Phenomenological Descriptors",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 53,
                "name": "Python",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4278,
                "name": "Symbolic Composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 56,
            "forum_user": {
                "id": 56,
                "user": 56,
                "first_name": "Anders",
                "last_name": "Vinjar",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/5ab37b8e835fa6b42977195ce3e953b1?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-03-02T17:10:00.098551+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 10,
                        "forum_user": 56,
                        "date_start": "2023-01-20",
                        "date_end": "2024-01-20",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "anders",
            "first_name": "Anders",
            "last_name": "Vinjar",
            "bookmarks": []
        },
        "slug": "mdif-a-composer-in-the-loop-mlp-by-anders-vinjar",
        "pk": 4394,
        "published": true,
        "publish_date": "2026-02-19T10:45:02+01:00"
    },
    {
        "title": "FLUX",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris",
        "content": "<p><span>FLUX is an immersive spatial audio composition designed for IRCAM&rsquo;s 6 channel speaker setup. The work explores the relationship between rivers, cities and people, illustrating commonalities and differences of the perception of rivers across the world. Utilising recordings of a range of different people speaking about their personal experiences with rivers, FLUX brings attention to the significance of rivers in our memories, daily lives, and communities. &nbsp;</span></p>\r\n<p><span>The use of spatial audio allows the audience to experience a sense of geographical distance in a physical environment and illustrates the interconnectedness of bodies of water. </span></p>\r\n<p><strong><br /><br /></strong></p>",
        "topics": [
            {
                "id": 1211,
                "name": "narrative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32945,
            "forum_user": {
                "id": 32897,
                "user": 32945,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6a1339760950a519a0910c128edfbbef?s=120&d=retro",
                "biography": "Ojasvani Dahiya is exploring creating interactive and immersive experiences that look at realities of the distant past and the far future which are grounded in the present. She is currently experimenting with new and emerging forms of technology to create visual experiences informed through sound and music. Her areas of interest are post-coloniality, identity, dreams and altered states of consciousness. Ojasvani graduated from Emerson College, Boston (2020) with a BFA in Media Arts Production, and went on to work in the Film/TV post-production industry in Los Angeles. She is currently on the Digital Direction MA program at the Royal College of Art.",
                "date_modified": "2023-11-06T21:49:51.196641+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "odahiya",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "flux-4",
        "pk": 2164,
        "published": true,
        "publish_date": "2023-03-25T15:37:06+01:00"
    },
    {
        "title": "\"rosebud\": Working with motion sensor data in audio and video post-production by Matthias Krüger",
        "description": "\"rosebud\" exemplifies an approach to hybrid composition methodologies: As a transdisciplinary composition between dance and music for dancer, sensors and live-electronics, existing as both a live performance and a video clip, it navigates between aural and visual layers, as well as the notions of liveness and fixed media/virtuality.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p></p>\r\n<p>Presented by : Matthias Kr&uuml;ger</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/maennis1112/\">Biography</a></p>\r\n<p>&nbsp;</p>\r\n<p>The presentation deals with the<span>&nbsp;</span><strong>methodology of composing with motion sensors and crossing the bridge to video production, as well as the aesthetic implications and potentials thereof.</strong></p>\r\n<p>Based on my piece<span>&nbsp;</span><strong>&laquo;&nbsp;rosebud&nbsp;&raquo; for dancer, sensors and live-electronics,</strong><span>&nbsp;</span>composed for the final concert of the 2021/22 Cursus (dance/choreography: Victor Virnot), which used the<span>&nbsp;</span><strong>Bitalio/R-ioT IMU sensors (later the NGIMU by x-io Technologies)</strong>, I developed a<span>&nbsp;</span><strong>new videoclip in which the motion sensor data is directly translated to digital VFX.</strong></p>\r\n<p style=\"text-align: center;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/be51acd78f18486a9a4c4e49e5c1a45d.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" />&nbsp;</p>\r\n<p>This project follows my general concern with<span>&nbsp;</span><strong>\"hybrid composition\"</strong>, which not only means an<strong><span>&nbsp;</span>transdisciplinary approach on a performance level</strong>, but also the<strong><span>&nbsp;</span>total imbrication and structural interdependence between each layer:<span>&nbsp;</span></strong>Just like a piece of instrumental acoustic music cannot exist without the motions of the instrumentalist, here as 
well the music cannot be shaped in the desired way without shaping the motions required to produce that very same shape. So<span>&nbsp;</span><strong>neither can exist without the other; the choreography becomes the score, and vice versa.</strong></p>\r\n<p>But it also means this: The<span>&nbsp;</span><strong>piece doesn't need to end with its live performance</strong>, but<span>&nbsp;</span><strong>can also exist as an alternative non-live version.</strong></p>\r\n<p>Initially thought from a<span>&nbsp;</span><strong>sustainibility and longevity perspective</strong><span>&nbsp;</span>of a \"work of performance art\", making sure future viewings of the piece beyond the rare<span>&nbsp;</span><em>physical</em><span>&nbsp;</span>performance opportunities, it entails a certain openness of the finished form of the piece, making it<span>&nbsp;</span><strong>co-exist in several equivalent versions</strong>, for example<span>&nbsp;</span><strong>a live performance and a video version</strong>, leaving the question unanswered which one is the original version.</p>\r\n<p>On the process level, this openness may also<span>&nbsp;</span><strong>create an iterative and reciprocal relationship between certain compositional decisions:</strong><span>&nbsp;</span>Whereas choreographical needs may, for example, impose a certain temporal structure to the music, the timing, certain camera perspectives, lighting for filming, or post-production decisions (VFX, color grading) may impact the interpretation and mise-en-scene of a subsequent live performance.</p>\r\n<p>So:<span>&nbsp;</span><strong>What is the piece? The live performance? Or the video clip? 
And is the video clip a representation of the live performance?</strong></p>\r\n<p><strong>&nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/bb3e87ced93245da6e609ba2c088c02c.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></strong></p>\r\n<p style=\"text-align: center;\">❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎&nbsp;</p>\r\n<p>With his in mind, the basic concept meant above all to<span>&nbsp;</span><strong>conserve the specific physical aura of the live dance performance<span>&nbsp;</span></strong>in an audio/video recording. The path chosen for this to<span>&nbsp;</span><strong>recreate the relationship between the dancer and the dominant medium</strong>: Just as he controls the music live and in real-time through his movements during the performance, his movements will also<strong><span>&nbsp;</span>control the exact shape and inherent energy of the video picture. For this, a limited set of VFX was chosen in order to distort and enhance the video image.</strong></p>\r\n<p>The original live version, premiered in September 2022 at Centre Pompidou in Paris, already featured a<span>&nbsp;</span><strong>max/msp patch</strong><span>&nbsp;</span>which<strong><span>&nbsp;</span>mapped specific motion and contact sensor parameters onto musical and compositional parameters</strong>.</p>\r\n<p>In a<span>&nbsp;</span><strong>revision, rehearsal and filming residency at GMEM in Marseille in January 2024</strong>, the same setup was kept whilst the music and choreography compositionally developed; then we not only<span>&nbsp;</span><strong>recorded the video footage for the videoclip (photography by Zo&euml; Schreckenberg), but also the sensor data<span>&nbsp;</span></strong>along with the audio.</p>\r\n<p>This had<span>&nbsp;</span><strong>two purposes</strong>:</p>\r\n<ul>\r\n<li>Having precisely matched audio cues to<span>&nbsp;</span><strong>create an audio track exactly synchronized with the images</strong>\r\n<ul>\r\n<li>the patch would be 
run remotely and as to record/export one single audio file<span>&nbsp;</span><em>a posteriori</em><span>&nbsp;</span>in order to have an audio track without any edits\r\n<ul>\r\n<li>This worked particularly well in \"rosebud\" because there is no audio input (no voice, no instruments); all sounds originate directly in the computer (different kinds of synthesis, FX, etc.).&nbsp;</li>\r\n<li>Needless to say that this audio track is based on a<strong><span>&nbsp;</span>performance that&mdash;in that exact shape and form&mdash;has never actually taken place</strong>, but is rather like a \"<strong>Frankenstein\"-like assemblage of data sets</strong><span>&nbsp;</span>creating a performance that resembles one that<span>&nbsp;</span><em>could have taken place<span>&nbsp;</span></em>and is, in fact, already being simulated by the illusion that is the video edit.</li>\r\n</ul>\r\n</li>\r\n</ul>\r\n</li>\r\n<li>Having matching data streams mirroring the movements in the video, in order to<span>&nbsp;</span><strong>map those to digital VFX processing the video images</strong><span>&nbsp;</span>in post-production,<span>&nbsp;</span><strong>translating the kinetic energy of the live performance to the video<span>&nbsp;</span></strong>and thus<span>&nbsp;</span><strong>transcending a mere video representation</strong><span>&nbsp;</span>of the live performance.</li>\r\n</ul>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8d343109deb75c1090005b9249f81dec.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Basing myself on the video edit, done according to<span>&nbsp;</span><strong>visual and videographical premises such as consistency and continuity, as well as general aesthetic choices</strong>, and loosely based on the audio recording of one of several run-throughs, I subsequently had to<span>&nbsp;</span><strong>match up the sensor data to the thusly assembled video takes.</strong></p>\r\n<p>Whilst in the live performance 
the motion sensor data is formatted in<span>&nbsp;</span><strong>OSC</strong>, it had to simulatenously be output during the recording session as<span>&nbsp;</span><strong>MIDI<span>&nbsp;</span></strong>in order to oobtain a<strong><span>&nbsp;</span>controllable and editable visual representation of the data<span>&nbsp;</span></strong>as well as a format in which it is can be loaded into in the<span>&nbsp;</span><strong>same editing environment as audio and video (such as a DAW)</strong>.<span>&nbsp;</span><strong>Transitions between different MIDI takes had to be smoothed manually</strong><span>&nbsp;</span>in order to create the illusion of continuity for which our eyes are more forgiving than our ears.</p>\r\n<p>Together with the<span>&nbsp;</span><strong>Toronto-based multimedia artist Michaelias Pichlkastner<span>&nbsp;</span></strong>was&nbsp;then devised a methodology to<span>&nbsp;</span><strong>apply this data to a selection of VFX</strong>, creating the aforementioned conceptual correspondence between the music and video layers. 
Using the<span>&nbsp;</span><strong>software/programming environment VVVV</strong>&mdash;processing data like max/msp in real-time&mdash;, we could<strong><span>&nbsp;</span>select and</strong><span>&nbsp;</span><strong>map specific MIDI events</strong><span>&nbsp;</span>(representing the sensor data)<span>&nbsp;</span><strong>and have them control the VFX (playing back both video source and MIDI file synchronously).</strong></p>\r\n<p><strong><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b92e3b237f48b5387159b4c66d5ac8a3.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" />&nbsp;</strong></p>\r\n<p>Furthermore, there was one<span>&nbsp;</span><strong>unexpected, yet postitive side effect</strong><span>&nbsp;</span>that turned out to be of utmost importance:</p>\r\n<ul>\r\n<li>Having data input, practically a<span>&nbsp;</span><strong>\"virtual avatar\" of the dancer</strong>, enabled me to<span>&nbsp;</span><strong>perfect the piece, debug the patch and mix it all</strong><span>&nbsp;</span>without needing a live performer to reiterate the movements<span>&nbsp;</span><strong>necessary for trials and testing</strong>.\r\n<ul>\r\n<li>This turned out to be the probably most important aspect of the process: The creation of this \"avatar\" of the performer, effectively \"<strong>downloading and storing their performance and movements</strong>\", enabling me<span>&nbsp;</span><strong>countless playbacks of the same data sets<span>&nbsp;</span></strong>without all tribulations this would entail for a human performer.&nbsp;</li>\r\n</ul>\r\n</li>\r\n</ul>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/edef15306a387fbf837e976c1b244b4e.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>This almost<strong><span>&nbsp;</span>arborescent working process</strong>, starting out with the idea of mapping movements onto digital VFX, has ended up leading to<strong><span>&nbsp;</span>many new 
applications of its methodology<span>&nbsp;</span></strong>as well as altering the exact form of the live performance/version, making it rather<span>&nbsp;</span><strong>rhizomatic</strong>. The<strong><span>&nbsp;</span>compositional authority</strong><span>&nbsp;</span>thus is beginning to be blurred and<strong><span>&nbsp;</span>shifting away from the composer and towards the dancer (of course), and the editor of the video.&nbsp;</strong></p>\r\n<p style=\"text-align: center;\">❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎&nbsp;</p>\r\n<p>This methodology is seems<span>&nbsp;</span><strong>extremely promising and will be further explored in future projects.<span>&nbsp;</span></strong>On top of all things already explored in \"rosebud\", another<strong><span>&nbsp;</span>possible application<span>&nbsp;</span></strong>of these principles might open up possibilities for<span>&nbsp;</span><strong>\"4D-like renditions\"</strong><span>&nbsp;</span><strong>of the video versions of future pieces</strong>: Where sensors or computers control appliances like<span>&nbsp;</span><strong>ventilators<span>&nbsp;</span></strong>or other sensorially perceptible devices, thusly<span>&nbsp;</span><strong>recorded, synchronized and edited data lanes<span>&nbsp;</span></strong>(MIDI, DMX, etc.)<span>&nbsp;</span><strong>may constitute additional playback tracks for</strong><span>&nbsp;</span><strong>augmented audio/video projections</strong><span>&nbsp;</span>of what otherwise would be just a regular film screening.</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\">❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎ ❇︎&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p><strong><em>&ldquo;rosebud&rdquo; &ndash; hybrid composition for dancer, sensors and live-electronics (multi-channel audio), existing in alternative versions as both a live version and as a video clip</em></strong></p>\r\n<p><em>&nbsp;Video clip (HD, 1920x1080/1080p) for projection (wall/screen). 
Audio available in Stereo (2.0, 2.1) or multi-channel (4.0, 4.1, 5.1, or custom [4+]).</em></p>\r\n<p>Matthias Kr&uuml;ger (Paris/Hamburg),<em><span>&nbsp;</span>direction/composition/co-choreography/electronics/video editing/mixing</em></p>\r\n<p>Victor Virnot (Paris),<span>&nbsp;</span><em>dance/choreography</em></p>\r\n<p>Zo&euml; Schreckenberg (Darmstadt),<em><span>&nbsp;</span>director of photography</em></p>\r\n<p>Lukas Ipsmiller (Vienna/Athens),<span>&nbsp;</span><em>lighting and camera assistant</em></p>\r\n<p>Rikisaburo Sato (Cologne),<span>&nbsp;</span><em>color grading</em></p>\r\n<p>Michaelias Pichlkastner (Toronto/Vienna),<span>&nbsp;</span><em>Digital VFX</em></p>\r\n<p>&nbsp;</p>\r\n<p><em>Music produced and mixed at IRCAM (Paris, 2022/2025), GRAME (Lyon, 2023), GMEM (Marseille, 2024) and CIRMMT (Montreal, 2024).</em></p>\r\n<p><em>Filmed at GMEM, Marseille on January 26, 2024.</em></p>\r\n<p>&nbsp;</p>\r\n<p>Supported by:</p>\r\n<ul>\r\n<li>Kunststiftung NRW (<strong>D&uuml;sseldorf</strong>; Arts Foundation of Northrhine-Westfalia)</li>\r\n<li>Impuls Neue Musik (<strong>Berlin</strong>)</li>\r\n<li>Dr. Christiane Hackerodt Kunst- und Kulturstiftung (<strong>Hannover</strong>)</li>\r\n<li>Centro Tedesco di Studi Veneziani (<strong>Venice</strong>)</li>\r\n<li>Le Vivier - Carrefour des Musiques Nouvelles (<strong>Montreal</strong>)</li>\r\n<li>GRAME (<strong>Lyon</strong>, Centre National de Cr&eacute;ation Musicale)</li>\r\n<li>GMEM (<strong>Marseille</strong>, Centre National de Cr&eacute;ation Musicale)</li>\r\n<li>x-io Technologies (<strong>Bristol</strong>)</li>\r\n</ul>",
        "topics": [
            {
                "id": 840,
                "name": "choreography",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 319,
                "name": "Dance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 74,
                "name": "Midi",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 167,
                "name": "Mouvement",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 950,
                "name": "OSC ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2630,
                "name": "sensors",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 127,
                "name": "Video",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 16771,
            "forum_user": {
                "id": 16768,
                "user": 16771,
                "first_name": "Matthias",
                "last_name": "Krüger",
                "avatar": "https://forum.ircam.fr/media/avatars/DSC03879.jpg",
                "avatar_url": "/media/cache/97/a8/97a82ea4726b7af9b1947e1adab21023.jpg",
                "biography": "Matthias Krüger is a composer for contemporary music based between Paris and Hamburg. Born in 1987 in Ulm (Germany), he grew up in Brussels and Trier, and studied music composition at Cologne’s Hochschule für Musik und Tanz, at Columbia University in New York City, and at IRCAM in Paris, as well as French language at Cologne University and Sorbonne University (Paris). Currently he is a doctoral candidate at Hamburg's University of Music and Theatre, researching hybrid composition between music, dance, theatre and video.\nHe has received numerous awards and scholarships, including from DAAD, the German National Academic Foundation, Berlin’s Mendelssohn competition in 2013, Cologne's B.A. Zimmermann award in 2015, and the Chevillion-Bonnaud composition award (Orléans 2016), as well as a nomination for the 2018 Gaudeamus Award (Utrecht). Residencies and research trips have taken and continue to take him, among others, to Istanbul, Paris, New Zealand, Montreal, where he was composer-in-residence at the Goethe-Institut and Le Vivier as well as a Graduate Researcher at McGill University Montreal/CIRMMT, and Venice, where he was Artist-in-Residence at the German Center for Venetian Studies.",
                "date_modified": "2026-02-24T18:29:13.058162+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 185,
                        "forum_user": 16768,
                        "date_start": "2026-02-13",
                        "date_end": "2027-02-13",
                        "type": 0,
                        "keys": [
                            {
                                "id": 41,
                                "membership": 185
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "matthiaskrueger",
            "first_name": "Matthias",
            "last_name": "Krüger",
            "bookmarks": []
        },
        "slug": "working-with-motion-sensor-data-in-audio-and-video-post-production-by-matthias-kruger",
        "pk": 3284,
        "published": true,
        "publish_date": "2025-02-13T00:58:52+01:00"
    },
    {
        "title": "ULYSSES 2: Hybrid Composition for Violin and Responsive AI Systems by Eleonora Sofia Podestà and Roberto Maria Cipollina.",
        "description": "Ulysses 2 is a project conceived by composer Roberto Cipollina and performed by violinist Eleonora Podestà. The work serves both as a performative and technological exploration of real-time performer-machine interaction, emphasizing the role of AI not as a passive tool, but as an active and adaptive musical agent within the creative process.",
        "content": "<p style=\"text-align: justify;\"><em>Ulysses 2</em> is the outcome of a collaborative process between composer Roberto Maria Cipollina and violinist Eleonora Sofia Podest&agrave;. The work is conceived as a closed-form improvisational<span>&nbsp; </span>structure for acoustic instrument and real-time interactive electronics, developed specifically to explore the creative potential of artificial intelligence in relation to the performer&rsquo;s improvisation.<span>&nbsp;</span></p>\r\n<p style=\"text-align: justify;\">At the core of <em>Ulysses 2</em> is the integration of Somax2, a real-time generative system<span>&nbsp; </span>developed within the Max environment, which enables responsive electronic behavior through the analysis and transformation of live performance data.</p>\r\n<p style=\"text-align: justify;\">While the project fully embraces aleatory elements and<span>&nbsp; </span>the concept of extemporaneity, it also adheres to an organized formal structure that guides its overall development. In fact, the performer engages with a series of prompts provided by the composer, ensuring a coherent trajectory.<span>&nbsp;</span></p>\r\n<p style=\"text-align: justify;\">The electronic component, built from a database of sampled sounds, responds and adapts to the performer&rsquo;s expressive gestures in real-time. 
Through Somax2&rsquo;s processing, the system generates musically congruent textures and transformations.<span>&nbsp;</span></p>\r\n<p style=\"text-align: justify;\">This piece highlights the software&rsquo;s ability to translate performance parameters into musically coherent electronic answers, fostering a dynamic and co-creative dialogue between human performer and machine intelligence.</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<p style=\"text-align: justify;\">&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 2100,
                "name": "Composition mixte",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1945,
                "name": "generative ai",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2788,
                "name": "Improvisation, generativity and co-creative interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1875,
                "name": "violin",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 108001,
            "forum_user": {
                "id": 107866,
                "user": 108001,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/PHOTO-2025-07-05-21-39-35.jpg",
                "avatar_url": "/media/cache/24/79/2479595baeb5830c15209e5b5f40e642.jpg",
                "biography": "Eleonora Sofia Podestà (2004) earned her Bachelor (2022) and Master (2024) degree in violin performance with honors at Conservatorio “G. Puccini” in La Spezia, under the guidance of Duccio Ceccanti. She also followed advanced training courses in chamber music in Scuola di Musica di Fiesole and Accademia di Musica di Pinerolo. She performs across Europe, both solo and in chamber ensembles. Her passion for new music began with LabMusCont and led her to join GAMO ensemble (Gruppo Aperto Musica Oggi). In 2024 she won a 3-year AFAM doctoral scholarship focused on contemporary violin performance, under the supervision of Alberto Gatti. In 2025 she attended a workshop with Ensemble Intercontemporain, held in Accademia \"W. Stauffer\" in Cremona, where she had the chance to work with Jeanne Marie Conquer and Diego Tosi. The focal point of her current research is the role of the performer in contemporary music, exploring elements such as interaction between performer and new technologies, extended violin techniques and improvisation.",
                "date_modified": "2026-03-02T11:53:48.560438+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "eleonorasofiapodesta",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3620,
                    "user": 108001,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4400,
                    "user": 108001,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "ulysses-2-hybrid-composition-for-violin-and-responsive-ai-systems-by-eleonora-sofia-podesta-and-roberto-maria-cipollina",
        "pk": 3620,
        "published": true,
        "publish_date": "2025-08-13T21:56:27+02:00"
    },
    {
        "title": "Max/MSP Spat library, sensors, and Unreal Engine: a workflow for a real-time generative VR project - Marta ROSSI",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>In this demo, I will show how to develop a workflow to create an immersive autogenerative project in VR using Max/MSP and Unreal Engine 5.</p>\r\n<p>Although many have written and shown how to use immersive techniques for asynchronous VR projects, very little can be found on how to set up a real-time immersive VR space.&nbsp;Using EEG and ECG Arduino sensors to generate emotionally adaptive music in Max/MSP and a hardware modular synth, and encoding the sounds in HOA with the Max/MSP Spat library, I am going to render the music in real-time in binaural format for the VR headset&rsquo;s headphones.</p>\r\n<p>The headset tracking data is gathered separately in Max/MSP and UE5 to minimize latency. The generated music also modifies the 3D environment and Niagara Systems in Unreal Engine 5, with the relevant data sent via OSC through bespoke referencing of Unreal blueprints.</p>\r\n<p><br />The audience will learn a method for integrating tools for the generativity and spatialisation of sound in real-time in Unreal Engine, in order to create interactive VR installations that challenge the interaction between the user and the artwork, destabilising the subject-object hierarchy.</p>",
        "topics": [],
        "user": {
            "pk": 24289,
            "forum_user": {
                "id": 24262,
                "user": 24289,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Marta_bw.jpg",
                "avatar_url": "/media/cache/ac/6d/ac6d8d2a29bb4623262e8a954a192916.jpg",
                "biography": "Marta Rossi (aka NoOne) is an Italian composer, performer, sound and visual artist, based in the north-east of Scotland. Deeply interested in chaos and order relationships, where ordered macro-structures emerge from chaotic and unregulated behaviours, and in living beings-to-machine interactions, she’s engaged in an aesthetic-philosophical research on how to destabilise the subject-object hierarchy and on how we can take advantage of the idea and the experience of connections. In this path she organized unconventional events of electronic music and contemporary art; she collaborated with several artists in live and theatrical performances, and produced original soundtracks for independent short films. With her duo, Silent Chaos, she performed in many venues across Italy and UK (Cryptic Nights, Sound Festival, sonADA, Listen Again Festival, and others) and worked on five studio albums. In recent years their performances focussed on immersive A/V performances, and the use of sensors in installations, like Human AutomatArt, a sensors-based large generative graphics installation.",
                "date_modified": "2026-02-12T12:52:52.057537+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "noone_511",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 30,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 26,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 27,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 86,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 24289,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "maxmsp-spat-library-sensors-and-unreal-engine-a-workflow-for-a-real-time-generative-vr-project",
        "pk": 2094,
        "published": true,
        "publish_date": "2023-02-28T17:08:02+01:00"
    },
    {
        "title": "Realtime Re-synthesis with RAVE by Simone Conforti & Alberto Gatti",
        "description": "Forum IRCAM 2026 Presentation of the Rave environment in Max \r\nSimone Conforti and Alberto Gatti",
        "content": "<p><strong><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></p>\r\n<p>When improvising with electronic instruments and MIDI controllers, the gestural embodiment and sonic reactivity of the interface can often feel rigid, lacking the subtle nuances naturally achievable with acoustic instruments.</p>\r\n<p>AI-based sound re-synthesis introduces an element of unpredictability that enhances variability within the performance environment. This dynamic quality can foster a more expressive and responsive playing experience.</p>\r\n<p>The implementation of RAVE in Max will be presented, followed by an improvised performance integrating this technology. Techniques for real-time exploration and exploitation of the system will be demonstrated.</p>\r\n<p></p>",
        "topics": [
            {
                "id": 4265,
                "name": "Rave in real-time performances",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17504,
            "forum_user": {
                "id": 17501,
                "user": 17504,
                "first_name": "Simone",
                "last_name": "Conforti",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/bd5643c4ddc3901d7416b5450d303925?s=120&d=retro",
                "biography": "Composer, computer music designer, sound designer and software developer, born in Winterthur, graduated in Flute and Electronic Music.\r\n\r\nComputer Music Designer professor at IRCAM and Co-founder and CTO of MUSICO. \r\n\r\nFormerly co-founder of MusicFit and MUSST, he has worked for ArchitetturaSonora, and as a researcher for Basel University, the HEM Geneva, the HEMU in Lausanne and the MARTLab research center in Florence.\r\n\r\nSpecialised in interactive and multimedia arts, his work also involves intensive music-oriented technology design; in this field he has developed many algorithms, which range from sound spatialisation and space virtualisation to sound masking and generative music.\r\n\r\n\r\nHe has been professor of Electroacoustic Composition and Computer Music at the Conservatoires of Cuneo and Florence and worked as computer music designer at the CIMM of the Venice Biennale.",
                "date_modified": "2026-02-22T12:42:43.061633+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 784,
                        "forum_user": 17501,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [
                            {
                                "id": 524,
                                "membership": 784
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "conforti",
            "first_name": "Simone",
            "last_name": "Conforti",
            "bookmarks": []
        },
        "slug": "realtime-re-synthesis-with-rave",
        "pk": 4383,
        "published": true,
        "publish_date": "2026-02-18T12:28:30+01:00"
    },
    {
        "title": "New Tuning Theory/Practice",
        "description": "a new tuning theory",
        "content": "<p>I have elaborated a new tuning theory. It proposes the rectilinear function y = .083333x, quite simply a twelfth, to form a series of ratios which, when picked from the second octave series, i.e. 1.083333-1.916666, and applied to note 13 as point zero, doubling to 26, 52, 104, etc. for consecutive octave tunings, will preserve cherished ratios of the harmonic series. I am reporting this immediately, prior to further elaboration, as it is a big subject. <img src=\"/media/uploads/user/be1a70ab15653d65a7293bb9983012ce.png\" alt=\".083333 function\" width=\"1384\" height=\"248\" /></p>\r\n<p><img src=\"/media/uploads/user/887b5e5eecd0e91bad29152be4c49115.png\" alt=\"\" width=\"193\" height=\"328\" /> The red line is y=.083333x; the green is y=12&radic;2/12x.</p>\r\n<p>My website to interact and follow progress on this is 12Fingers.org</p>\r\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 17661,
            "forum_user": {
                "id": 17657,
                "user": 17661,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7356ec9886128a3b915cfe90fc832be6?s=120&d=retro",
                "biography": null,
                "date_modified": "2023-11-18T10:39:32.702791+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "flartec",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "new-tuning-theorypractice",
        "pk": 440,
        "published": false,
        "publish_date": "2020-01-18T01:28:20+01:00"
    },
    {
        "title": "Composer les espaces et la perception / REVELO",
        "description": "Résidence en recherche artistique 2018.19.\r\nMarco Antonio Suarez-Cifuentes.\r\nEn collaboration avec l'équipe Interaction son musique mouvement de l'Ircam-STMS et du Zentrum für Kunst und Medien (ZKM).",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">R&eacute;sidence en recherche artistique 2018.19</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<p><strong>&laquo; Composer les espaces et la perception / REVELO &raquo;</strong><br />En collaboration avec l'&eacute;quipe<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/issm/\">Interaction son musique mouvement</a><span>&nbsp;</span>de l'Ircam-STMS et du<span>&nbsp;</span><a href=\"http://zkm.de/\" target=\"_blank\">Zentrum f&uuml;r Kunst und Medien</a>(ZKM).</p>\r\n<p>L&rsquo;interaction de l&rsquo;espace acoustique, instrumental et &eacute;lectroacoustique ainsi que la conception des dispositifs sc&eacute;niques de type architectural sont devenus au fil de temps les territoires pour la construction, la r&eacute;flexion et le d&eacute;veloppement de mon propre cheminement de recherche artistique. J&rsquo;ai toujours explor&eacute; dans mes cr&eacute;ations des outils pour construire des &oelig;uvres &agrave; dimensions multiples, insaisissables d&rsquo;un seul point de vue ou d&rsquo;&eacute;coute ; des spectacles qui conf&egrave;rent au public un r&ocirc;le participatif dans la construction de la perception sonore ; des cr&eacute;ations ou le corps du musicien est un sujet actif de l&rsquo;&eacute;laboration du geste instrumental. J&rsquo;aborde une musique qui se construit dans l&rsquo;intimit&eacute; de chaque spectateur gr&acirc;ce &agrave; sa m&eacute;moire. 
Comme compositeur, je m&rsquo;int&eacute;resse d&rsquo;avantage &agrave; la confrontation po&eacute;tique de la perception collective avec la perception individuelle.</p>\r\n<p>Le travail de recherche que je propose dans le contexte de ma r&eacute;sidence artistique au sein de l&rsquo;&eacute;quipe Interaction son musique mouvement&nbsp; et au ZKM se dirige vers l&rsquo;exp&eacute;rimentation et le d&eacute;veloppement des outils informatiques, des technologies et des applications web-audio me permettant de composer une interaction individuelle avec chaque spectateur dans le contexte d&rsquo;un spectacle vivant augment&eacute;e.</p>\r\n<p>Je m&rsquo;int&eacute;resse d&rsquo;avantage &agrave; l&rsquo;exp&eacute;rimentation des possibles situations d&rsquo;&eacute;coute combinant des musiciens sonoris&eacute;es, des instruments hybrides, des dispositifs &eacute;lectroacoustiques (temps r&eacute;el ou diff&eacute;r&eacute;) &agrave; des sons diffus&eacute;s sous casque; &agrave; travailler en studio des alternatives pour leur mixage et leur spatialisation; &agrave; identifier en collaboration avec les equipes de recherche les probl&eacute;matiques techniques et conceptuelles qui d&eacute;coulent de l&rsquo;utilisation du web-audio. 
Ce projet de recherche artistique se d&eacute;veloppe sur plusieurs lignes: le web-audio, les espaces cognitifs, l&rsquo;interaction son mouvement, l&rsquo;acoustique des salles et l&rsquo;acoustique instrumentale.</p>\r\n<p>Pour pouvoir int&eacute;grer ma recherche &agrave; la formalisation des potentiels projets artistiques et comprendre les enjeux d&rsquo;&eacute;criture musicale d&eacute;riv&eacute;s de ce type de dispositif de diffusion multi-plateforme, je propose dans le cadre de la r&eacute;sidence, l'exp&eacute;rimentation publique de quelques installations/tableaux pr&eacute;figurant<span>&nbsp;</span><em>REVELO</em>, un spectacle lyrique/installation con&ccedil;u avec Nieto (metteur en sc&egrave;ne et artiste multimedia).</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Marco Antonio Suarez-Cifuentes</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202019/.thumbnails/marco_suarez_cifuentes.jpg/marco_suarez_cifuentes-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographie</h3>\r\n<p>Compositeur et r&eacute;alisateur en informatique musicale, le colombien Marco Su&aacute;rez-Cifuentes s&rsquo;est form&eacute; &agrave; l&rsquo;universit&eacute; Javeriana de Bogot&aacute;, au Conservatoire national sup&eacute;rieur de musique et de danse de Paris (CNSMDP), &agrave; l&rsquo;Ircam et &agrave; la Fondation Royaumont. 
En 2017, il ach&egrave;ve sa th&egrave;se doctorale intitul&eacute;e &laquo; Interactions, articulations et po&eacute;tique de l&rsquo;espace instrumental, acoustique et &eacute;lectro-acoustique &raquo;, dirig&eacute;e par Fr&eacute;d&eacute;ric Bevilacqua (Ircam), Stefano Gervasoni et Luis Na&oacute;n (CNSMDP) et est promu Docteur en Arts et cr&eacute;ation, SACRe (ENS-ED 540 - PSL) / CNSMDP.</p>\r\n<p>Ses &oelig;uvres ont &eacute;t&eacute; cr&eacute;&eacute;es en Europe et en Am&eacute;rique latine, et jou&eacute;es par des musiciens de l&rsquo;Ensemble intercontemporain, Le Balcon, Multilat&eacute;rale, l'Itin&eacute;raire, EOC, l'Instant donn&eacute;, XAMP, Vortex, Contrechamps, Onyx, Decibelio. Depuis 2003, il est successivement en r&eacute;sidence artistique&nbsp; au GNEM, CMM du CENART &agrave; Mexico, Studio Musiques Inventives d&rsquo;Annecy, Muse en Circuit, GRAME, studio Art ZOYD ; compositeur r&eacute;f&eacute;rent pour Transforme 2008 (Royaumont); et compositeur au sein de l&rsquo;&eacute;quipe Interaction Son Musique Mouvement (anciennement Interactions musicales temps r&eacute;el) &agrave; l'Ircam en 2010. Son travail est soutenu par le Minist&egrave;re de la Culture, l&rsquo;Ircam, Radio France, Voix Nouvelles, la SACEM, le Minist&egrave;re de la Culture de&nbsp; Colombie, les Fondations Carolina Oramas, Mazda, Meyer et Tarrazi.</p>\r\n<p>Marco Antonio a enseign&eacute; la composition aux CRD de Romainville et de Laval (2008 - 2016). 
Il est r&eacute;guli&egrave;rement invit&eacute; en tant que professeur&nbsp; de composition &agrave; l&rsquo;universit&eacute; Javeriana de Bogot&aacute;.</p>\r\n<p>Depuis 2017, il collabore avec le metteur en sc&egrave;ne Nieto sur un spectacle lyrique et visuel intitul&eacute;<span>&nbsp;</span><em>REVELO</em>.</p>\r\n</div>\r\n</div>\r\n<p><strong>Courriel :</strong><span>&nbsp;</span>Marco.Suarez (at) ircam.fr</p>\r\n<ul class=\"unstyled-list\">\r\n<li class=\"mb1\"><strong>&Eacute;quipe :<span>&nbsp;</span></strong><a href=\"https://www.ircam.fr/recherche/equipes-recherche/ismm/\">Interaction son musique mouvement</a><span>&nbsp;</span>(Sacre CNSMDP)<span>&nbsp;</span></li>\r\n</ul>\r\n<h2 class=\"dotted\">Liens</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"https://marcosuarezcifuentes.wordpress.com/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>https://marcosuarezcifuentes.wordpress.com/</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "composer-les-espaces-et-la-perception-revelo",
        "pk": 23,
        "published": true,
        "publish_date": "2019-03-21T15:47:11+01:00"
    },
    {
        "title": "CCRMA",
        "description": "Hello everyone,\n\nlike last year, Andrea Agostini, Julien Vincenot, Davor Vincze and myself will hold a beginner-level summer seminar on the bach library for Max at CCRMA (Stanford University). It is an online seminar, from August 30th to September 3rd. The course will be held from 9am to 1pm Pacific Time, which means 6pm to 10pm Central European Time.\n\nMore information and the course syllabus can be found here:\nhttps://ccrma.stanford.edu/workshops/bach-in-maxmsp\n\nTo enroll:\nhttps://www.eventbrite.com/o/ccrma-summer-workshops-33124778619\n\nEarly enrollment is still valid till August 20th and gives a $50 discount (bach patrons get an additional $50 discount).\n\nThe syllabus is given as a reference, so that people who are already quite familiar with most of the topics may refrain from enrolling. Notice that it is NOT a very advanced seminar, so those of you who already do crazy things with bach & co. probably will not need it. However, it is meant to give a good, all-round overview of several subjects, so it may also be useful to fill in some blanks.\n\nBest,\nDaniele Ghisi",
        "content": "",
        "topics": [],
        "user": {
            "pk": 375,
            "forum_user": {
                "id": 375,
                "user": 375,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/35e1cc14164e2b11037f9652f4f11972?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "danieleghisi",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ccrma",
        "pk": 977,
        "published": false,
        "publish_date": "2021-08-17T00:16:06.214279+02:00"
    },
    {
        "title": "CCRMA summer course: introduction to bach",
        "description": "Hello everyone,\n\nlike last year, Andrea Agostini, Julien Vincenot, Davor Vincze and I will hold a beginner-level summer seminar on the bach library for Max at CCRMA (Stanford University). It is an online seminar, from August 30th to September 3rd. The course will be held from 9am to 1pm Pacific Time, which means 6pm to 10pm Central European Time.\n\nMore information and the course syllabus can be found here:\nhttps://ccrma.stanford.edu/workshops/bach-in-maxmsp\n\nTo enroll:\nhttps://www.eventbrite.com/o/ccrma-summer-workshops-33124778619\n\nEarly enrollment is still valid till August 20th and gives a $50 discount (bach patrons get an additional $50 discount).\n\nThe syllabus is given as a reference, so that people who are already quite familiar with most of the topics may refrain from enrolling. Notice that it is NOT a very advanced seminar, so those of you who already do crazy things with bach & co. probably will not need it. However, it is meant to give a good, all-round overview of several subjects, so it may also be useful to fill in some blanks.\n\nBest,\nDaniele Ghisi",
        "content": "<p>Hello everyone,</p>\n<p>like last year, Andrea Agostini, Julien Vincenot, Davor Vincze and I will hold a beginner-level summer seminar on the bach library for Max at CCRMA (Stanford University). It is an online seminar, from August 30th to September 3rd. The course will be held from 9am to 1pm Pacific Time, which means 6pm to 10pm Central European Time.</p>\n<p>More information and the course syllabus can be found here:<br />https://ccrma.stanford.edu/workshops/bach-in-maxmsp</p>\n<p>To enroll:<br />https://www.eventbrite.com/o/ccrma-summer-workshops-33124778619</p>\n<p>Early enrollment is still valid till August 20th and gives a $50 discount (bach patrons get an additional $50 discount).</p>\n<p>The syllabus is given as a reference, so that people who are already quite familiar with most of the topics may refrain from enrolling. Notice that it is NOT a very advanced seminar, so those of you who already do crazy things with bach &amp; co. probably will not need it. However, it is meant to give a good, all-round overview of several subjects, so it may also be useful to fill in some blanks.</p>\n<p>Best,<br />Daniele Ghisi</p>",
        "topics": [],
        "user": {
            "pk": 375,
            "forum_user": {
                "id": 375,
                "user": 375,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/35e1cc14164e2b11037f9652f4f11972?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "danieleghisi",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ccrma-summer-course-introduction-to-bach",
        "pk": 976,
        "published": false,
        "publish_date": "2021-08-17T00:15:37.604497+02:00"
    },
    {
        "title": "Indian Ethnic & Traditional Kids Wear Online Collection | JOVI INDIA",
        "description": "Ethnic kidswear from Jovin Fashion is known for its high quality, classic styling, and careful workmanship. Our list of Indian ethnic and traditional kids' dresses is curated with 5–10 of the best designers who know what timeless style means when dressing up your child.",
        "content": "<p><span style=\"\">Ethnic kidswear from Jovin Fashion Parents&mdash;The kidswear is of high quality, and the kidswear in Jovin Fashion kidswear store is known for its high quality, classic styling, and&ensp;careful workmanship. Our list of<a href=\"https://www.joviindia.com/collections/indian-ethnic-kids-dresses\"> </a></span><a href=\"https://www.joviindia.com/collections/indian-ethnic-kids-dresses\"><strong>indian ethnic and traditional kids dresses</strong></a><span style=\"\"><a href=\"https://www.joviindia.com/collections/indian-ethnic-kids-dresses\"> </a>is curated with 5&ndash;10 best designers&ensp;who know what timeless style means when dressing up your child. Find stunning indian kids' dresses online in India that&ensp;are inspired by the cultural magnificence of the country, which come packed with age-old classics. At JOVI India, it's excellence all the way with every garment&ensp;, and that excellence comes with meticulously selected fabric, perfectly cut patterns, robust construction, and detailed craftsmanship. We are all for beauty and function co-existing (so much co-existing), so your child can dazzle in stylish (and comfortable) ethnic wear&mdash;whether&ensp;it's wedding attire or time-honoured accessories. Our range extends from XS to 6XL, including tailored size options for those&ensp;who want the perfect fit. Buy exquisite </span>indian ethnic kids dress online<span style=\"\"> with free shipping at Zojogi for cities in India, Singapore, Dubai, Australia, and get&ensp;free international shipping on orders worth $500 USD. Use code JOVI25 NOW to get 25% off for a limited time and with JOVI India for unparalleled elegance, comfort&ensp;, and craftsmanship.</span></p>\n<p><span style=\"\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/7ae8c61e26ae5ca81deabcf9f58c4397.jpg\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0772d3e611c23439178684ea238fef10.jpg\"></span></p>",
        "topics": [
            {
                "id": 4527,
                "name": "business",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4526,
                "name": "fashion",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4531,
                "name": "indiandress",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4532,
                "name": "indiankidsdress",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4529,
                "name": "kidsclothing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4530,
                "name": "kidsdresses",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4528,
                "name": "kidswear",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166245,
            "forum_user": {
                "id": 166009,
                "user": 166245,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/2c81463b09596f5fe3f61def2d20b47c?s=120&d=retro",
                "biography": "Auraphia works at JOVI INDIA, a brand focused on creating the Indian Ethnic and Traditional Dresses for Kids in India. With a passion for professional style, Auraphia represents a brand that blends elegance, comfort, and confidence in modern workplace fashion.",
                "date_modified": "2026-03-31T10:54:51.408847+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "auraphia",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "indian-ethnic-traditional-kids-wear-online-collection-jovi-india",
        "pk": 4561,
        "published": false,
        "publish_date": "2026-03-31T10:58:38.932449+02:00"
    },
    {
        "title": "RETEXTULE",
        "description": "Exploratory glitch delay",
        "content": "<div>\r\n<div>We are really excited to release :::: RETEXTULE</div>\r\n<div>&nbsp;</div>\r\n</div>\r\n<div>\r\n<div>Exploratory glitch delay module by Fendoap.</div>\r\n<div>&nbsp;</div>\r\n</div>\r\n<div>\r\n<div><a href=\"https://www.audiobulb.com/RETEXTULE.htm?fbclid=IwAR2YdsHIU7Du6G42sSzOMuPfS17qBjR-WM_39rpTxbqXCHVnknnCxxf3-80\">https://www.audiobulb.com/RETEXTULE.htm</a></div>\r\n<div>&nbsp;</div>\r\n</div>\r\n<div>\r\n<div>RETEXTULE is a 4-parallel looper type effector that can change the playback speed. Interesting textures can be created with parallel loopers with variable playback speed. You can reinforce pad sounds by layering reverse sounds, octaves up, or slightly pitch-shifted sounds. Since the playback speed can be finely set with a decimal point, you can create a mysterious sound that creates a series of non-integer overtones.</div>\r\n<div>&nbsp;</div>\r\n<div><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5c83331fc8e482871c74bf57a6e72c82.png\" /></div>\r\n<div>&nbsp;</div>\r\n</div>\r\n<div>\r\n<div>Use it on field-recorded sound for interesting sounds, or add to ambient or drones to create unexpected textures.</div>\r\n<div>&nbsp;</div>\r\n<div>Each parallel looper is a windowed multi(1~12)-tap delay. Each is evenly arranged with a phase shift. You can change the playback speed to create effects like reverse delays and intermediate granular effects.</div>\r\n</div>",
        "topics": [],
        "user": {
            "pk": 43505,
            "forum_user": {
                "id": 43447,
                "user": 43505,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/2022_Audiobulb_Logo_Icon_Square_WB.png",
                "avatar_url": "/media/cache/d5/ed/d5eda0be81592f9401f1639212e905e2.jpg",
                "biography": null,
                "date_modified": "2023-05-20T11:47:42+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "audiobulb",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "retextule",
        "pk": 2247,
        "published": true,
        "publish_date": "2023-05-20T12:49:54+02:00"
    },
    {
        "title": "\"Interstices of Sound and Light\" by Hsiao, Yung-Shen (Taiwan)",
        "description": "\"Interstices of Sound and Light\" is a musical inscription of Taipei’s urban memory, exploring the interplay of sound, time, and space through pipa, cello, and eight-channel electronic music.",
        "content": "<p></p>\r\n<div><span><strong>Interstices of Sound and Light: Weaving Taipei's Memory through Eight-Channel Interaction</strong></span></div>\r\n<div>&nbsp;</div>\r\n<div><span>\"Interstices of Sound and Light\" is a musical inscription of Taipei&rsquo;s urban memory, exploring the interplay of sound, time, and space through pipa, cello, and eight-channel electronic music. Inspired by Taipei&rsquo;s sonic fragments&mdash;morning rain, the clamor of night markets, the nocturnal hum of low frequencies&mdash;these memory shards, chaotic yet orderly, warm yet solitary, resemble half-open doors, each leading to larger, endless recollections. The piece captures the fusion of the city&rsquo;s nature and metropolis, history and modernity, delineating the layers and flow of memory.</span></div>\r\n<div><span>&nbsp;</span></div>\r\n<div><span>The work employs an eight-channel spatial design, utilizing ChucK software to create real-time interactive soundscapes, integrating transformed urban and instrumental samples into live pipa and cello performances. ChucK&rsquo;s real-time algorithms enable dynamic transformations of urban samples and soundfield variations, allowing sounds to shift between whispers and surges, simulating Taipei&rsquo;s pulse. A live electronic improviser, integrated into the performance, contributes spontaneous sonic gestures, enriching the interactive soundfield with authentic urban resonance.</span></div>\r\n<div><span>The &ldquo;interstices&rdquo; symbolize the fractures and ambiguities of time, while &ldquo;light and shadow&rdquo; metaphorize the city&rsquo;s flow and transformation. Pipa and cello, bridging tradition and contemporaneity, converse with the electronic layer, forming a sonic palimpsest. This presentation explores how \"Interstices of Sound and Light\" weaves Taipei&rsquo;s memory through eight-channel interaction, blending technical innovation with cultural introspection, and invites listeners to reflect: what memories linger in the sonic crevices of your city?</span></div>",
        "topics": [
            {
                "id": 3542,
                "name": "8 channels",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3543,
                "name": "ChucK",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3540,
                "name": "city",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1820,
                "name": "interactive live electronics",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3541,
                "name": "Taipei",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 111783,
            "forum_user": {
                "id": 111640,
                "user": 111783,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/8327716a11854b1bf49dc3c9df1b395c?s=120&d=retro",
                "biography": "Hsiao, Yung-Shen\n\nPhD in Music Composition, University of Bristol, UK. \nYung-Sheng Hsiao’s musical oeuvre encompasses concert instrumental music, electroacoustic music, and music for film, spanning diverse genres and integrating Eastern and Western musical materials. The poetic quality and fusion of sound remain central to his creative process. His works have been selected and performed at music festivals and electronic music centers in major cities across the UK, USA, Italy, and Taiwan. In recent years, he has collaborated with chamber ensembles and musicians worldwide. He was commissioned by the Pipa Ensemble to create a multi-channel live electroacoustic work for the Taipei Chinese Orchestra (TCO) Traditional Arts Season. In 2023, his work was selected for the 50th Anniversary Concert of the Asian Composers League (ACL) Taiwan and performed at the NTSO Wufeng Music Festival. In 2017, his composition was chosen to represent Taiwan at the 64th International Rostrum of Composers organized by UNESCO in Italy. His works have also been featured at the 2016 Bristol New Music Series, the 2014 Bristol New Music Festival, and the 29th Asian Composers League Asia-Pacific Music Festival (2011).",
                "date_modified": "2026-01-20T14:42:12.370465+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "hsiao",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3851,
                    "user": 111783,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 45,
                    "user": 111783,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "hsiao-yung-shen",
        "pk": 3851,
        "published": true,
        "publish_date": "2025-10-13T11:43:32+02:00"
    },
    {
        "title": "Somax for Live by Marco Fiorini",
        "description": "Discover Somax for Live: a new tool bringing the musical intelligence of Somax2 into the native environment of Ableton Live. This presentation explores how the system opens a fluid dialogue between musician and machine for co-improvisation, composition and performance.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>As part of the REACH project in the Music Representation team at IRCAM, Somax for Live brings the real-time interactive capabilities of <a href=\"https://forum.ircam.fr/projects/detail/somax-2/\">Somax2</a>&nbsp;directly into&nbsp;Ableton Live.&nbsp;</p>\r\n<p>Developed by&nbsp;Manuel Poletti&nbsp;in collaboration with&nbsp;Marco Fiorini&nbsp;and&nbsp;G&eacute;rard Assayag, this new integration bridges advanced symbolic AI improvisation with a widely used digital audio workstation, opening new creative workflows for composers, performers, and producers.</p>\r\n<p>Implemented as a collection&nbsp;of Max for Live devices,&nbsp;Somax for Live&nbsp;allows users to interactively co-create with the system within Live&rsquo;s native environment, combining the temporal and stylistic modeling of Somax2 with the flexibility of Live&rsquo;s clips, automations, and control interfaces. This tight coupling between musical intelligence and production tools encourages a fluid dialogue between human and machine musicianship, enabling adaptive accompaniment, generative composition, and exploratory performance practices within an accessible and modular setup.</p>\r\n<p>This presentation will showcase the architecture, interaction paradigms, and artistic use cases of&nbsp;Somax for Live, illustrating how the REACH project advances hybrid human&ndash;AI co-creativity in contemporary music-making.</p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2788,
                "name": "Improvisation, generativity and co-creative interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1287,
                "name": "REACH",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 747,
                "name": "somax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4245,
                "name": "somax for live",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32267,
            "forum_user": {
                "id": 32219,
                "user": 32267,
                "first_name": "Marco",
                "last_name": "Fiorini",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-01-16_at_10.39.51.jpeg",
                "avatar_url": "/media/cache/e7/ed/e7ed5b0d44a066e65e188a351b8c9bb8.jpg",
                "biography": "Marco Fiorini is an Italian musician and researcher specializing in human-machine interaction in musical improvisation. \nHe is part of the Music Representation team at IRCAM in Paris, working on the ERC REACH project with a focus on Somax2. \nAs a PhD candidate at Sorbonne Université he develops co-creative instruments that foster real-time interaction between musicians and artificial agents.\nHe has collaborated with artists such as Joëlle Léandre, George Lewis, Steve Lehman, and Horse Lords. His work as guitarist, electronic musician and computer music designer has been featured at major international venues and festivals including Carnegie Hall (New York), ManiFeste (Centre Georges Pompidou, Paris), Improtech Paris-Tokyo (Tokyo University of the Arts), Klang (Royal Danish Academy of Music, Copenhagen), Mixtur (ESMUC, Barcelona).\nIn 2024, he was an invited lecturer at the Max Summer School at Tokyo Geidai University of the Arts, and in 2025 he will lead a Somax2 workshop at Berklee College of Music for the 50th anniversary of the International Computer Music Conference in Boston.\nHe holds degrees in Jazz Guitar, Electronic Music, Sound and Music Computing, and Software Engineering.",
                "date_modified": "2026-02-25T18:50:33.457396+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 407,
                        "forum_user": 32219,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "fiorini",
            "first_name": "Marco",
            "last_name": "Fiorini",
            "bookmarks": []
        },
        "slug": "somax-for-live-by-marco-fiorini",
        "pk": 4369,
        "published": true,
        "publish_date": "2026-02-16T14:54:09+01:00"
    },
    {
        "title": "embroidered-shawls to Elevate Your Fashion Look",
        "description": "embroidered-shawls are beautifully crafted fashion pieces featuring detailed embroidery on premium fabrics. Known for their elegance and warmth, these shawls are perfect for winter styling, festive occasions, and weddings.\n",
        "content": "<h2>What Are embroidered-shawls?</h2>\n<p><a href=\"https://elaboreluxury.com/collections/embroidered-shawls\">embroidered-shawls</a> are premium fashion accessories known for their intricate needlework and luxurious feel. These shawls are crafted using high-quality fabrics like wool, cashmere, and pashmina, making them both stylish and comfortable for winter wear.</p>\n<p>They are widely loved for their artistic embroidery that adds elegance and uniqueness to every piece.</p>\n<hr>\n<h2>Why embroidered-shawls Are Always in Demand</h2>\n<p>The popularity of embroidered-shawls comes from their perfect combination of beauty and functionality. They not only provide warmth but also enhance your overall look with detailed craftsmanship.</p>\n<p>Reasons for their demand:</p>\n<ul>\n<li>Unique handcrafted designs</li>\n<li>Perfect for all age groups</li>\n<li>Suitable for both casual and festive wear</li>\n<li>Long-lasting quality</li>\n</ul>\n<hr>\n<h2>Popular Embroidery Styles in embroidered-shawls</h2>\n<p>embroidered-shawls come in a variety of embroidery styles, each representing rich cultural traditions:</p>\n<ul>\n<li><strong>Sozni Embroidery:</strong> Fine and detailed needlework</li>\n<li><strong>Aari Work:</strong> Chain stitch embroidery with bold patterns</li>\n<li><strong>Tilla Work:</strong> Gold and metallic thread embroidery</li>\n<li><strong>Kashmiri Motifs:</strong> Inspired by nature like flowers and leaves</li>\n</ul>\n<p>These styles make every shawl visually stunning and exclusive.</p>\n<hr>\n<h2>Best Occasions to Wear embroidered-shawls</h2>\n<p>embroidered-shawls are versatile and can be worn on multiple occasions:</p>\n<ul>\n<li>Weddings and festive celebrations</li>\n<li>Formal events and parties</li>\n<li>Daily winter wear</li>\n<li>Traditional and cultural functions</li>\n</ul>\n<p>They instantly add a royal and graceful touch to any outfit.</p>\n<hr>\n<h2>How to Choose the Right embroidered-shawls</h2>\n<p>When buying embroidered-shawls, consider these factors:</p>\n<ul>\n<li>Fabric quality (Pashmina, wool, cashmere)</li>\n<li>Type of embroidery work</li>\n<li>Color and design suitability</li>\n<li>Authenticity of craftsmanship</li>\n</ul>\n<p>Choosing the right shawl ensures both comfort and style.</p>\n<hr>\n<h2>Styling Ideas for embroidered-shawls</h2>\n<p>Make the most of your embroidered-shawls with these styling tips:</p>\n<ul>\n<li>Pair with ethnic wear for a traditional look</li>\n<li>Combine with jeans and tops for fusion fashion</li>\n<li>Use as a statement accessory for minimal outfits</li>\n<li>Drape elegantly for formal occasions</li>\n</ul>\n<hr>\n<h2>Benefits of Investing in embroidered-shawls</h2>\n<p>embroidered-shawls are not just fashion items but valuable wardrobe investments:</p>\n<ul>\n<li>Timeless style that never goes out of trend</li>\n<li>Durable and long-lasting</li>\n<li>Suitable for multiple occasions</li>\n<li>Reflects luxury and elegance</li>\n</ul>\n<hr>\n<h2>Conclusion</h2>\n<p>embroidered-shawls represent the perfect blend of tradition, craftsmanship, and modern fashion. Their intricate embroidery, soft texture, and elegant designs make them a must-have accessory for every wardrobe.</p>\n<p>If you want to enhance your style with something truly unique, embroidered-shawls are the perfect choice.</p>",
        "topics": [
            {
                "id": 4539,
                "name": "embroidered-shawls",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166341,
            "forum_user": {
                "id": 166105,
                "user": 166341,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a053613fe6f95130b8e798ec65e5832b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-01T13:44:58.436606+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "elaboreluxury",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "embroidered-shawls-to-elevate-your-fashion-look",
        "pk": 4576,
        "published": false,
        "publish_date": "2026-04-02T06:49:28.471971+02:00"
    },
    {
        "title": "Intuitive Composing in Jazz by Sara Simionato",
        "description": "The research project Intuitive Composing in Jazz explores the integration of AI technologies into the creative process of jazz improvisation and composition, utilizing AI tools such as RAVE and Somax2 as co-creative agents. The resulting musical output is recorded, transcribed and analyzed to investigate how intuition and embodiment can be affected in the human-machine interactive co-creative process, and ultimately, to generate new music scores.\r\nWhile AI has been explored in various musical genres, including classical, electronic, and popular music, its application to jazz remains generally unexplored in an artistic research context. \r\nAlthough the jazz compositional process often employs improvisation in its intuitive and embodied nature, there is a lack of literature focusing on these aspects; furthermore, there is a gap in knowledge regarding embodiment and creativity in musical AI. \r\nBuilding on the ongoing research of the CREATIE research group, the project aims to improve and develop new approaches, methodologies and skills regarding the human-machine co-creative process in jazz improvisation and composition.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/426768ab21d986261983277e928dedb7.jpg\" width=\"795\" height=\"446\" /></span></p>\r\n<p>This research project explores the integration of AI technologies into the creative process of jazz improvisation and composition, utilizing Somax2 as a co-creative agent. The central research question is: <em>How do AI-driven systems influence intuitive and embodied creativity in jazz improvisation and composition?</em></p>\r\n<p>While AI has been explored in various musical genres, including classical, electronic, and popular music, its application to jazz remains generally unexplored in an artistic research context. Although the jazz compositional process often employs improvisation in its intuitive and embodied nature, there is a lack of literature focusing on these aspects; furthermore, there is a gap in knowledge regarding embodiment and creativity in musical AI. This project addresses these gaps, exploring the human-machine relationship in an embodied creative context, and suggesting new possibilities for jazz improvisation and composition.</p>\r\n<p>The methodological approach involves improvised music sessions that are recorded, transcribed and analysed to investigate how intuition and embodiment can be affected in the human-machine interactive co-creative process, and ultimately, to generate new music scores. 
In this framework, artistic practice serves as the primary method of investigation: the process of improvising with AI tools offers experiential, analytical and reflective knowledge, while the transcription and analysis of improvised music sessions offer a way to question, test and refine the methodologies throughout the research. In addition, semi-structured interviews are conducted with musicians and experts in musical AI interaction. These interviews provide further insight into performers&rsquo; embodied experience of human&ndash;machine co-creation and inspire new understandings and methodologies within the process.</p>\r\n<p>Building on the ongoing research of the CREATIE research group, the project aims to improve and develop new approaches, methodologies and skills regarding the human-machine co-creative process in jazz improvisation and composition.</p>",
        "topics": [
            {
                "id": 3462,
                "name": "AI & Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4075,
                "name": "Jazz Composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4074,
                "name": "Jazz Improvisation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 153964,
            "forum_user": {
                "id": 153740,
                "user": 153964,
                "first_name": "Sara",
                "last_name": "Simionato",
                "avatar": "https://forum.ircam.fr/media/avatars/62cabc13-a8a9-447e-8d65-e9785ad0f6c7.jpg",
                "avatar_url": "/media/cache/fc/51/fc5120d36451dd760569f2119a5ebc96.jpg",
                "biography": "Sara Simionato is a Brussels-based singer and composer from Venice whose work spans contemporary jazz, improvisation, and chamber music. Her research explores embodied practices and the integration of technology and AI into the creative process. She performs across Europe as a singer, bandleader, and composer; she teaches Vocal Improvisation Techniques at the Royal Conservatory of Antwerp, where she also conducts her current research project Intuitive Composing in Jazz.",
                "date_modified": "2026-02-05T12:46:57.989247+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sarasimionato",
            "first_name": "Sara",
            "last_name": "Simionato",
            "bookmarks": []
        },
        "slug": "intuitive-composing-in-jazz-by-sara-simionato",
        "pk": 4264,
        "published": true,
        "publish_date": "2026-01-26T12:06:34+01:00"
    },
    {
        "title": "Women Pashmina Stoles – A Perfect Blend of Tradition & Luxury",
        "description": "Women Pashmina Stoles are a perfect combination of luxury, warmth, and timeless elegance. Made from fine Himalayan wool and handcrafted by skilled artisans, these stoles are lightweight, soft, and ideal for all seasons.",
        "content": "<h2>Introduction to Women Pashmina Stoles</h2>\n<p><a href=\"https://elaboreluxury.com/collections/women-pashmina-stoles\">Women Pashmina Stoles</a> are one of the most luxurious and versatile fashion accessories, known for their unmatched softness and elegance. Crafted from the fine wool of the Changthangi goat found in the Himalayas, these stoles are often referred to as the &ldquo;soft gold of the Himalayas.&rdquo;</p>\n<p>They are not just winter wear but timeless pieces that represent heritage, craftsmanship, and sophistication.</p>\n<hr>\n<h2>What Makes Women Pashmina Stoles Special?</h2>\n<p>The uniqueness of Women Pashmina Stoles lies in their premium quality and handcrafted artistry. Each stole is carefully woven by skilled Kashmiri artisans using traditional techniques.</p>\n<p>Key features include:</p>\n<ul>\n<li>Ultra-soft and lightweight texture</li>\n<li>Exceptional warmth and comfort</li>\n<li>Elegant and timeless designs</li>\n<li>Suitable for all seasons</li>\n</ul>\n<hr>\n<h2>Craftsmanship Behind Women Pashmina Stoles</h2>\n<p>Creating Women Pashmina Stoles is a detailed and time-consuming process that reflects true craftsmanship:</p>\n<ul>\n<li><strong>Wool Collection:</strong> Fine fibre sourced from Himalayan goats</li>\n<li><strong>Hand Spinning:</strong> Threads are created manually to maintain softness</li>\n<li><strong>Hand Weaving:</strong> Crafted on traditional looms</li>\n<li><strong>Finishing:</strong> Washed, fringed, and quality-checked</li>\n</ul>\n<p>This process can take weeks or even months, making each piece unique and valuable.</p>\n<hr>\n<h2>Types of Women Pashmina Stoles</h2>\n<p>There are different varieties of Women Pashmina Stoles available:</p>\n<ul>\n<li>Pure Pashmina Stoles</li>\n<li>Zari Pashmina Stoles</li>\n<li>Kani Pashmina Stoles</li>\n<li>Embroidered Pashmina Stoles</li>\n<li>Kalamkari Pashmina Stoles</li>\n<li>Modern Minimalist Stoles</li>\n</ul>\n<p>Each type offers a unique blend of 
traditional and modern fashion.</p>\n<hr>\n<h2>Why Choose Women Pashmina Stoles?</h2>\n<p>Women Pashmina Stoles are a must-have for anyone who values luxury and elegance:</p>\n<ul>\n<li>Perfect for weddings and festive occasions</li>\n<li>Ideal for both ethnic and western outfits</li>\n<li>Lightweight yet extremely warm</li>\n<li>Long-lasting and timeless</li>\n</ul>\n<p>These stoles are not just accessories but a symbol of refined taste and heritage.</p>\n<hr>\n<h2>Styling Tips for Women Pashmina Stoles</h2>\n<p>You can style Women Pashmina Stoles in multiple ways:</p>\n<ul>\n<li>Drape over sarees or suits for a traditional look</li>\n<li>Pair with western outfits for a modern touch</li>\n<li>Use as a winter wrap for warmth</li>\n<li>Add elegance to formal and casual outfits</li>\n</ul>\n<hr>\n<h2>Care Tips for Women Pashmina Stoles</h2>\n<p>To maintain the quality of your Women Pashmina Stoles:</p>\n<ul>\n<li>Dry clean only</li>\n<li>Store in a cotton or muslin cloth</li>\n<li>Avoid direct sunlight</li>\n<li>Keep away from perfumes and moisture</li>\n</ul>\n<hr>\n<h2>Conclusion</h2>\n<p>Women Pashmina Stoles are the perfect blend of luxury, comfort, and timeless fashion. Their softness, warmth, and handcrafted beauty make them an essential addition to every wardrobe.</p>\n<p>If you are looking for elegance and authenticity, Women Pashmina Stoles are the ideal choice for both everyday wear and special occasions.</p>",
        "topics": [
            {
                "id": 4541,
                "name": "Women Pashmina Stoles",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166341,
            "forum_user": {
                "id": 166105,
                "user": 166341,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a053613fe6f95130b8e798ec65e5832b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-01T13:44:58.436606+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "elaboreluxury",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "women-pashmina-stoles-a-perfect-blend-of-tradition-luxury",
        "pk": 4577,
        "published": false,
        "publish_date": "2026-04-02T07:00:03.744464+02:00"
    },
    {
        "title": "luck882cocoms",
        "description": "luck882cocoms",
        "content": "<p>luck8 l&agrave; nền tảng giải tr&iacute; trực tuyến được nhiều người chơi tin tưởng nhờ hệ thống vận h&agrave;nh ổn định v&agrave; trải nghiệm mượt m&agrave; tr&ecirc;n mọi thiết bị. Với giao diện hiện đại, dễ sử dụng, luck8 mang đến cảm gi&aacute;c th&acirc;n thiện cho cả người mới lẫn những game thủ l&acirc;u năm. nh&agrave; c&aacute;i luck8 ch&uacute; trọng đầu tư v&agrave;o c&ocirc;ng nghệ bảo mật ti&ecirc;n tiến, gi&uacute;p bảo vệ th&ocirc;ng tin v&agrave; giao dịch của người chơi một c&aacute;ch tối đa. B&ecirc;n cạnh đ&oacute;, kho tr&ograve; chơi tại luck8 v&ocirc; c&ugrave;ng phong ph&uacute;, từ c&aacute; cược thể thao, game b&agrave;i trực tuyến, slot game cho đến bắn c&aacute; hấp dẫn. Tốc độ nạp r&uacute;t nhanh ch&oacute;ng, quy tr&igrave;nh đơn giản c&ugrave;ng đội ngũ hỗ trợ 24/7 lu&ocirc;n sẵn s&agrave;ng giải đ&aacute;p mọi thắc mắc, gi&uacute;p người chơi y&ecirc;n t&acirc;m trải nghiệm. Ngo&agrave;i ra, <a href=\"https://luck882.co.com/\">https://luck882.co.com/</a> c&ograve;n thường xuy&ecirc;n triển khai c&aacute;c chương tr&igrave;nh ưu đ&atilde;i hấp dẫn, mang lại nhiều cơ hội gia tăng gi&aacute; trị giải tr&iacute; cho người tham gia. Đ&acirc;y l&agrave; lựa chọn ph&ugrave; hợp cho những ai đang t&igrave;m kiếm một s&acirc;n chơi trực tuyến an to&agrave;n, tiện lợi v&agrave; đ&aacute;ng tin cậy.</p>",
        "topics": [],
        "user": {
            "pk": 166611,
            "forum_user": {
                "id": 166374,
                "user": 166611,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a1cb291864ed0cc2c1c7308db2d2b051?s=120&d=retro",
                "biography": "luck8 là nền tảng giải trí trực tuyến được nhiều người chơi tin tưởng nhờ hệ thống vận hành ổn định và trải nghiệm mượt mà trên mọi thiết bị. Với giao diện hiện đại, dễ sử dụng, luck8 mang đến cảm giác thân thiện cho cả người mới lẫn những game thủ lâu năm. nhà cái luck8 chú trọng đầu tư vào công nghệ bảo mật tiên tiến, giúp bảo vệ thông tin và giao dịch của người chơi một cách tối đa. Bên cạnh đó, kho trò chơi tại luck8 vô cùng phong phú, từ cá cược thể thao, game bài trực tuyến, slot game cho đến bắn cá hấp dẫn. Tốc độ nạp rút nhanh chóng, quy trình đơn giản cùng đội ngũ hỗ trợ 24/7 luôn sẵn sàng giải đáp mọi thắc mắc, giúp người chơi yên tâm trải nghiệm. Ngoài ra, https://luck882.co.com/ còn thường xuyên triển khai các chương trình ưu đãi hấp dẫn, mang lại nhiều cơ hội gia tăng giá trị giải trí cho người tham gia. Đây là lựa chọn phù hợp cho những ai đang tìm kiếm một sân chơi trực tuyến an toàn, tiện lợi và đáng tin cậy.",
                "date_modified": "2026-04-04T16:54:56.219776+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "luck882cocoms",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "luck882cocoms",
        "pk": 4592,
        "published": false,
        "publish_date": "2026-04-04T16:53:49.784634+02:00"
    },
    {
        "title": "Il existe d'autres mondes dont ils ne vous ont pas parlé. Ils souhaitent vous parler - Felix Römer",
        "description": "Une pièce radiophonique générative de Felix Römer, improvisée en temps réel par deux interprètes humains et deux interprètes artificiels.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: justify;\">Pr&eacute;sent&eacute; par : Felix&nbsp;<span>R&ouml;mer</span><br /><a href=\"https://forum.ircam.fr/profile/fiedert/\">Biographie</a></p>\r\n<p style=\"text-align: justify;\"><a href=\"https://forum.ircam.fr/profile/fiedert/\"><strong></strong></a></p>\r\n<p style=\"text-align: justify;\"><strong>\"Il y a d'autres mondes dont ils ne vous ont pas parl&eacute;. Ils souhaitent vous parler\"</strong>. (nomm&eacute;e d'apr&egrave;s une citation du musicien de jazz Sun Ra) est une pi&egrave;ce radiophonique g&eacute;n&eacute;rative, qui r&eacute;imagine le format traditionnel de la pi&egrave;ce radiophonique &agrave; travers les capacit&eacute;s de l'intelligence artificielle contemporaine. En c&eacute;l&eacute;brant un si&egrave;cle de radio, elle peut &ecirc;tre consid&eacute;r&eacute;e comme un hommage &agrave; l'esprit d'exp&eacute;rimentation et de curiosit&eacute; technologique de l'avant-garde des ann&eacute;es 1920.</p>\r\n<p style=\"text-align: justify;\">La pi&egrave;ce sera improvis&eacute;e en temps r&eacute;el par quatre interpr&egrave;tes qui s'&eacute;coutent et r&eacute;agissent les uns aux autres : deux musiciens (un humain et une machine) et deux acteurs vocaux (&eacute;galement un humain et une machine). En utilisant un m&eacute;lange d'algorithmes de classification et de g&eacute;n&eacute;ration, les interpr&egrave;tes artificiels sont capables (a) d'&eacute;couter et d'analyser leurs homologues humains et (b) de g&eacute;n&eacute;rer de la musique / de la parole en cons&eacute;quence.</p>\r\n<p style=\"text-align: justify;\">Les outils &lt;RAVE&gt; et &lt;prior&gt; de l'IRCAM sont au c&oelig;ur de ce projet. 
Leur esth&eacute;tique particuli&egrave;re est utilis&eacute;e de mani&egrave;re &agrave; remettre en question les conventions narratives en brouillant la distinction entre la parole et la musique. S'inspirant de pionniers de l'avant-garde comme Kurt Schwitters, <strong>\"There Are Other Worlds...\"</strong> se d&eacute;ploie comme une cacophonie de syllabes absurdes, rappelant l'exp&eacute;rimentation linguistique du d&eacute;but du XXe si&egrave;cle : plut&ocirc;t que d'explorer un contenu explicite et descriptible, l'&oelig;uvre se concentre sur les qualit&eacute;s sonores des mots et des phrases. Qu'est-ce que le langage peut ou ne peut pas communiquer lorsqu'il est d&eacute;pouill&eacute; de tout contenu s&eacute;mantique ? Afin de trouver leurs propres r&eacute;ponses &agrave; cette question, les auditeurs n'auront rien d'autre qu'une &eacute;trange \"langueur\" : des sons purement linguistiques et leurs implications &eacute;motionnelles.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>\r\n<p></p>",
        "topics": [
            {
                "id": 753,
                "name": "Artificial intelligence,",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1743,
                "name": "neural network",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1745,
                "name": "nn~",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1744,
                "name": "prior",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1742,
                "name": "radio play",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27232,
            "forum_user": {
                "id": 27204,
                "user": 27232,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Bildschirmfoto_2023-05-16_um_13.43.10_1.png",
                "avatar_url": "/media/cache/d8/97/d8973a3b6d24849331f08786af566751.jpg",
                "biography": "Felix Römer (*1993) is a Berlin-based composer and pianist, who mainly works in the fields of contemporary music, film music, and improvisation.\n\nHe holds a Bachelor's Degree in Piano\\Jazz from the University of Fine Arts Berlin as well as a Master's Degree in Composition for Screen from Film University Babelsberg KONRAD WOLF. In 2019, he studied with Howard Davidson in the composition department of the Royal College of Music, London. From 2018 to 2019, he studied with Jean-François Zygel in the improvisation department of the Paris Conservatoire (CNSMDP). He took part in numerous masterclasses (with Ensemble Lux:NM, Francesca Verunelli, Helmut Lachenmann, i.a.) and was finalist of several international competitions (e.g. Montreux Jazz Solo Piano Competition 2016). His works have been programmed at numerous festivals, institutions and broadcasting stations, such as IRCAM (Paris), Hamburg Contemporary, Internationale Ferienkurse Darmstadt or Composers Concordance (New York).\n\nHis main musical interests lie in new technologies, the musicality of language, as well as spectral and soundscape-oriented composition, with a particular fascination for pipe organs.",
                "date_modified": "2025-12-03T12:18:30.173017+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 509,
                        "forum_user": 27204,
                        "date_start": "2023-04-14",
                        "date_end": "2024-04-14",
                        "type": 0,
                        "keys": [
                            {
                                "id": 291,
                                "membership": 509
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "fiedert",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "there-are-other-worlds-they-have-not-told-you-of-they-wish-to-speak-to-you",
        "pk": 2720,
        "published": true,
        "publish_date": "2024-02-13T10:07:47+01:00"
    },
    {
        "title": "Système de musique interactive XR (Human-Swarm Interactive Music System) - Pedro Lucas",
        "description": "Ce projet est un système de musique interactive (IMS) qui utilise la réalité mixte (MR) et les technologies audio spatiales pour une session de bouclage multipiste, chaque piste étant représentée par un \"agent\" qui, dans ce cas, est une entité incarnée par une source sonore et un espace visualisé sous la forme d'une sphère virtuelle.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par : Pedro Lucas<br /><a href=\"https://forum.ircam.fr/profile/pedro-lucas-bravo-gm/\">Biographie</a><br /><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2171521351a8ca7cac0bea59c7b110bf.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"978\" height=\"603\" /></p>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/53a98f8e395753375041a5a881881611.jpeg\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"1238\" height=\"696\" /></p>\r\n<p>&nbsp;</p>\r\n<p>Ce projet est un syst&egrave;me de musique interactive (IMS) qui utilise la r&eacute;alit&eacute; mixte (MR) et les technologies audio spatiales pour une session de bouclage multipiste, chaque piste &eacute;tant repr&eacute;sent&eacute;e par un \"agent\" qui, dans ce cas, est une entit&eacute; incarn&eacute;e par une source sonore et un espace visualis&eacute; sous la forme d'une sph&egrave;re virtuelle. Un agent poss&egrave;de &eacute;galement un comportement autonome qui permet la transformation du mat&eacute;riel musical initialement fourni par un musicien qui peut jouer en temps r&eacute;el avec ce syst&egrave;me, ainsi que son mouvement, en fonction de param&egrave;tres spatiaux provenant de l'interpr&egrave;te et d'autres agents. 
Comme il s'agit d'une section multipistes, l'interpr&egrave;te peut convoquer autant d'agents que de pistes cr&eacute;&eacute;es, de sorte qu'&agrave; terme, la session comportera un essaim artificiel interagissant avec le musicien.</p>\r\n<p>Il fonctionne de la mani&egrave;re suivante : L'artiste cr&eacute;e une source sonore en jouant une ligne musicale sur un contr&ocirc;leur MIDI de type piano et en modifiant les propri&eacute;t&eacute;s du son &agrave; l'aide de filtres et d'effets par le biais de boutons physiques. Le contr&ocirc;leur central, le Core System, comprend un looper qui enregistre et r&eacute;p&egrave;te ce mat&eacute;riel musical, cr&eacute;ant ainsi une source sonore qui peut &ecirc;tre entendue et vue dans l'espace. Cette source sonore est connue sous le nom d'agent musical et peut &ecirc;tre d&eacute;plac&eacute;e manuellement &agrave; l'aide des capacit&eacute;s de suivi du casque MR. Le syst&egrave;me Spatial Audio permet de cartographier la position de la source sonore sur le r&eacute;seau de haut-parleurs (syst&egrave;me ambisonique), et le casque MR restitue l'agent sous la forme d'une sph&egrave;re color&eacute;e dans l'espace physique.</p>\r\n<p>Gr&acirc;ce &agrave; des gestes sp&eacute;cifiques effectu&eacute;s &agrave; partir du casque MR, l'agent peut &ecirc;tre rel&acirc;ch&eacute; afin qu'il commence &agrave; se d&eacute;placer de mani&egrave;re autonome, en changeant le mat&eacute;riel musical (mais en conservant les propri&eacute;t&eacute;s sonores) de la boucle sur la base d'un algorithme d'apprentissage automatique aliment&eacute; en temps r&eacute;el pendant que l'utilisateur jouait initialement la ligne musicale. Lorsque l'agent est rel&acirc;ch&eacute;, un nouvel agent est instanci&eacute;, ce qui permet &agrave; l'utilisateur de l'initialiser avec une nouvelle boucle, puis de le rel&acirc;cher &agrave; nouveau. 
Ce processus peut &ecirc;tre r&eacute;p&eacute;t&eacute; plusieurs fois pour g&eacute;n&eacute;rer une session musicale multipiste, chaque piste en boucle &eacute;tant associ&eacute;e &agrave; un agent en forme de sph&egrave;re se d&eacute;pla&ccedil;ant dans l'espace audiovisuel 3D.</p>\r\n<p>Comme les agents se d&eacute;placent librement dans l'espace de repr&eacute;sentation, l'utilisateur peut &eacute;galement se d&eacute;placer dans l'espace physique. L'utilisateur peut attraper les agents lib&eacute;r&eacute;s pour modifier la boucle musicale et les propri&eacute;t&eacute;s du son, puis les rel&acirc;cher &agrave; nouveau dans cette interaction musicale homme-machine.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\"><br />Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 1853,
                "name": "extended reality (XR)",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1850,
                "name": "interactive music system",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1854,
                "name": "mixed reality (MR)",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1851,
                "name": "swarm intelligence",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 25407,
            "forum_user": {
                "id": 25380,
                "user": 25407,
                "first_name": "Pedro",
                "last_name": "Lucas",
                "avatar": "https://forum.ircam.fr/media/avatars/fotoPedro3.jpg",
                "avatar_url": "/media/cache/4c/d5/4cd5f97e4fd883c00c067a82e0ff8841.jpg",
                "biography": "I am PhD research fellow at RITMO (Centre for Interdisciplinary Studies in Rhythm, Time and Motion) located at the University of Oslo (UiO) in Norway. My background is in computer science but with a focus on music technology, which I have explored more in depth in a related master's programme at UiO. During my professional life, I have developed real-time systems related to game development and music, contributing to my experience in using game engines and sound synthesis programming languages to implement complex music systems. Lately, I have explored swarm intelligence on interactive music systems (IMS) for physical-virtual environments using technologies such as XR headsets, optical motion capture systems, and spatial audio platforms. My research is currently focused on human-swarm IMS, in which a musician can perform with self-organized and self-synchronized autonomous musical agents in virtual and/or physical setups.",
                "date_modified": "2024-03-10T09:51:46.252005+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "pedro-lucas-bravo-gm",
            "first_name": "Pedro",
            "last_name": "Lucas",
            "bookmarks": []
        },
        "slug": "xr-human-swarm-interactive-music-system-1",
        "pk": 2775,
        "published": true,
        "publish_date": "2024-02-28T11:05:19+01:00"
    },
    {
        "title": "Artificial Dream - Xirui Liao",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p><img alt=\"Artifical Dream_VR scene\" src=\"https://forum.ircam.fr/media/uploads/user/725da6e4826f38753971ba033592f675.png\" /></p>\r\n<p>In a world where gender is becoming increasingly fluid and non-binary, we are left to ponder whether traditional gender inequalities will still exist in the future. As we explore the possibility of a new gender coding system, we question the nature of gender and its impact on society.</p>\r\n<p>\"Artificial Dreams\" is a groundbreaking art project that utilizes virtual reality technology as its primary medium to create an immersive and interactive experience for the audience. The project provides a unique perspective on the operation mechanism of a 3-dimensional gender system, exploring the life experiences and inner world of the main character through a combination of reality and imagination.</p>",
        "topics": [
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1211,
                "name": "narrative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1119,
                "name": "spacial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1108,
                "name": "VR",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 38374,
            "forum_user": {
                "id": 38323,
                "user": 38374,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/evafracal.jpg",
                "avatar_url": "/media/cache/f4/c3/f4c3dabc5f374f5476b5bf2665772c93.jpg",
                "biography": "Xirui Liao is an interdisciplinary designer who focuses on art and science, natural form, and physical interaction. Her practice involves conceptual design, interaction, product, and graphic design, and she is committed to bringing together multiple disciplines to create unique and inspiring experiences. She is currently doing postgraduate studies at the Royal College of Art.",
                "date_modified": "2023-02-07T19:18:31+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "xiruiliao",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "artificial-dream",
        "pk": 2121,
        "published": true,
        "publish_date": "2023-03-08T23:27:43+01:00"
    },
    {
        "title": "Antony Workshop 2026 by Serge Lemouton, Jacques Warnier (CNSMDP), Malena Fouillou (CNMSDP), Xavier Garnier (Logilab)",
        "description": "Pratique de la documentation et de la préservation des œuvres mixtes avec le système Antony\r\nAtelier pour les journées du Forum Ircam mars 2026",
        "content": "<div>\r\n<div>\r\n<div>\r\n<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<h2>Resum&eacute;</h2>\r\n<p>L&rsquo;objectif de cet atelier est de pr&eacute;senter le syst&egrave;me Antony, d&eacute;sormais dans sa version finale et pleinement op&eacute;rationnelle.<br />&Agrave; l&rsquo;issue de cet atelier, les participants seront en mesure d&rsquo;utiliser la base de donn&eacute;es pour documenter, diffuser et pr&eacute;server leurs propres cr&eacute;ations.</p>\r\n<h2>Workshop description</h2>\r\n<p>La plateforme Antony a &eacute;t&eacute; con&ccedil;ue pour permettre le d&eacute;p&ocirc;t d&rsquo;archives num&eacute;riques, accompagn&eacute;es de descriptions d&eacute;taill&eacute;es des contenus et des formats de fichiers, ainsi que la d&eacute;finition et la gestion des droits d&rsquo;acc&egrave;s aux fichiers et &agrave; la documentation associ&eacute;e. Elle int&egrave;gre des fonctionnalit&eacute;s avanc&eacute;es de recherche, de consultation et de r&eacute;cup&eacute;ration des donn&eacute;es, dans le but de garantir l&rsquo;&eacute;tude, le portage, la reprise et la reproductibilit&eacute; d&rsquo;&oelig;uvres musicales.</p>\r\n<p>&Eacute;troitement d&eacute;pendante de la dur&eacute;e de vie de logiciels en constante &eacute;volution et de langages de programmation non compatibles, la pr&eacute;servation d&rsquo;une &oelig;uvre mixte repose souvent exclusivement sur la capacit&eacute; d&rsquo;un nombre restreint d&rsquo;individus &mdash; concepteurs en informatique musicale, compositeurs, ing&eacute;nieurs du son, etc. &mdash; &agrave; mettre &agrave; jour les programmes d&eacute;velopp&eacute;s (patches, code informatique, fichiers de partition, etc.) &agrave; chaque reprise de l&rsquo;&oelig;uvre. 
La viabilit&eacute; &agrave; long terme de celle-ci se trouve ainsi conditionn&eacute;e &agrave; ses performances. &Agrave; ces difficult&eacute;s inh&eacute;rentes aux &oelig;uvres recourant aux technologies num&eacute;riques s&rsquo;ajoute un enjeu suppl&eacute;mentaire de conservation et de diffusion : &agrave; l&rsquo;exception de la base de donn&eacute;es Sidney (archive num&eacute;rique des &oelig;uvres mixtes cr&eacute;&eacute;es &agrave; l&rsquo;IRCAM), tr&egrave;s peu d&rsquo;initiatives p&eacute;rennes ont permis l&rsquo;organisation et la conservation syst&eacute;matiques des patches, tant &agrave; des fins de sauvegarde que de mise &agrave; disposition pour les artistes et les chercheurs.</p>\r\n<p>Pour ouvrir cet atelier, nous pr&eacute;senterons bri&egrave;vement un panorama des initiatives (existantes ou aujourd&rsquo;hui disparues) relatives aux bases de donn&eacute;es en ligne d&eacute;di&eacute;es &agrave; la cr&eacute;ation musicale exp&eacute;rimentale. Nous comparerons les fonctionnalit&eacute;s qu&rsquo;elles proposent et aborderons la question de leur p&eacute;rennit&eacute; &agrave; long terme.</p>\r\n<p>Antony a &eacute;t&eacute; d&eacute;velopp&eacute; en suivant les standards et les bonnes pratiques du Web s&eacute;mantique. 
Le projet s&rsquo;appuie sur des normes et technologies &eacute;tablies du Web 3.0, notamment l&rsquo;utilisation de mod&egrave;les formels de repr&eacute;sentation des connaissances, de vocabulaires normalis&eacute;s et de structures <span>de donn&eacute;es interop&eacute;rables, afin de garantir l&rsquo;interop&eacute;rabilit&eacute; s&eacute;mantique, l&rsquo;extensibilit&eacute; et la durabilit&eacute; &agrave; long terme des donn&eacute;es.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div>\r\n<p>La conservation et la diffusion des &oelig;uvres collaboratives de musique mixte soul&egrave;vent d&rsquo;importants enjeux juridiques et de droit d&rsquo;auteur, en raison de la multiplicit&eacute; des contributeurs, de l&rsquo;int&eacute;gration de technologies num&eacute;riques et de la complexit&eacute; technique de ces &oelig;uvres. Chaque composant des archives &mdash; qu&rsquo;il s&rsquo;agisse d&rsquo;enregistrements audio, de fichiers de partition, de patches logiciels ou de scripts interactifs &mdash; peut &ecirc;tre soumis &agrave; des r&eacute;gimes distincts de protection du droit d&rsquo;auteur, &agrave; des conditions de licence sp&eacute;cifiques ou &agrave; des droits moraux. Cette complexit&eacute; est encore accrue lorsque les &oelig;uvres r&eacute;sultent de collaborations entre diff&eacute;rentes institutions, pays ou plateformes num&eacute;riques, impliquant des cadres juridiques potentiellement divergents.</p>\r\n<p>La gestion individualis&eacute;e des permissions d&rsquo;acc&egrave;s de chaque utilisateur aux fichiers joue un r&ocirc;le central dans la prise en compte de ces enjeux. 
La mise en place de contr&ocirc;les d&rsquo;acc&egrave;s fins permet de garantir que seuls les utilisateurs autoris&eacute;s peuvent consulter, modifier ou redistribuer les ressources num&eacute;riques, contribuant ainsi au respect des accords de licence et &agrave; la protection de la propri&eacute;t&eacute; intellectuelle de l&rsquo;ensemble des contributeurs. Ces droits-utilisateurs permettent &eacute;galement d&rsquo;adapter les niveaux d&rsquo;acc&egrave;s selon les profils &mdash; interpr&egrave;tes, chercheurs, enseignants ou grand public &mdash; tout en conciliant ouverture et respect des aspects juridiques.</p>\r\n<p>Une documentation robuste est essentielle pour assurer le suivi des droits, des attributions et des restrictions d&rsquo;usage. De mani&egrave;re g&eacute;n&eacute;rale, une gouvernance &eacute;ditoriale, juridique et technique rigoureuse est indispensable pour garantir &agrave; la fois la durabilit&eacute; et la diffusion &eacute;thique des &oelig;uvres collaboratives de musique mixte, dans le respect du droit d&rsquo;auteur.</p>\r\n<p>Les participants &agrave; l&rsquo;atelier pourront se cr&eacute;er un compte sur la plateforme Antony et y d&eacute;poser les &eacute;l&eacute;ments constitutifs de leurs cr&eacute;ations artistiques (partitions, patches, enregistrements audio, documentation technique, etc.). Chaque &eacute;l&eacute;ment d&eacute;pos&eacute; pourra &ecirc;tre identifi&eacute; et document&eacute; manuellement. 
La plateforme assurera le stockage de ces mat&eacute;riaux au sein d&rsquo;une base de donn&eacute;es structur&eacute;e et interop&eacute;rable, garantissant leur pr&eacute;servation &agrave; long terme, leur reproductibilit&eacute; et leur accessibilit&eacute; pour les utilisateurs autoris&eacute;s.</p>\r\n<p>Forts de notre exp&eacute;rience approfondie du r&eacute;pertoire des &oelig;uvres interactives int&eacute;grant des technologies num&eacute;riques, nous pr&eacute;senterons un ensemble de bonnes pratiques pour la documentation des &oelig;uvres de ce type. Ces recommandations porteront notamment sur les standards, les formats de fichiers, les styles de documentation et les pratiques de curation des donn&eacute;es.</p>\r\n<p>Enfin, Antony &eacute;tant une plateforme open source, distribu&eacute;e sous licence LGPL, elle peut &ecirc;tre d&eacute;ploy&eacute;e sous la forme d&rsquo;une instance locale de la base de donn&eacute;es. Les participants qui ne souhaitent pas stocker leurs donn&eacute;es sur la version d&rsquo;Antony h&eacute;berg&eacute;e par le CNSMDP ont la possibilit&eacute; de l&rsquo;installer sur leurs propres ordinateurs. Pour les participants int&eacute;ress&eacute;s par cette approche, nous fournirons des conseils et des instructions pratiques afin de faciliter l&rsquo;installation locale.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div>\r\n<h2>STRUCTURE DE L&rsquo;ATELIER</h2>\r\n<p>a. Pr&eacute;sentation du contexte<br />b. Comparaison avec des bases de donn&eacute;es existantes (telles que Sidney)<br />c. Introduction aux ontologies et th&eacute;saurus utilis&eacute;s<br />d. Discussion des enjeux juridiques li&eacute;s &agrave; la conservation et &agrave; la diffusion des &oelig;uvres collaboratives de musique mixte<br />e. Hands-on</p>\r\n<h2>TH&Eacute;MATIQUES ET ACTIVIT&Eacute;S</h2>\r\n<p>a. Introduction au processus de documentation<br />b. 
Documentation pratique d&rsquo;un projet personnel existant et int&eacute;gration dans la base de donn&eacute;es<br />c. Installation locale de la base de donn&eacute;es</p>\r\n<h2>TECHNOLOGIES UTILIS&Eacute;ES</h2>\r\n<p>&bull; Python<br />&bull; Web s&eacute;mantique<br />&bull; CubicWeb (Logilab)<br />&bull; PostgreSQL<br />&bull; Ontologies : CIDOC-CRM, FRBRoo/LRMoo, DOREMUS</p>\r\n<h2>PUBLIC CIBL&Eacute; ET ATTENTES</h2>\r\n<p>Cet atelier s&rsquo;adresse principalement aux compositeurs, r&eacute;alisateurs en informatique musicale et interpr&egrave;tes, mais peut &eacute;galement int&eacute;resser des artistes multi-m&eacute;dias, musicologues, documentalistes, &eacute;diteurs de musique, etc.<br />Le nombre de participants est limit&eacute; &agrave; 20.</p>\r\n<p>Les participants sont invit&eacute;s &agrave; venir avec les &eacute;l&eacute;ments li&eacute;s &agrave; un projet artistique personnel existant qu&rsquo;ils souhaitent documenter, &eacute;ditorialiser et pr&eacute;server.</p>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 4110,
                "name": "diffusion",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 68,
                "name": "Documentation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2044,
                "name": "preservation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 76,
            "forum_user": {
                "id": 76,
                "user": 76,
                "first_name": "Serge",
                "last_name": "Lemouton",
                "avatar": "https://forum.ircam.fr/media/avatars/deborah.jpg",
                "avatar_url": "/media/cache/eb/52/eb52181309dccd2a20b1dc1b54ef0f67.jpg",
                "biography": "Serge Lemouton\n\nréalisateur en informatique musicale Ircam\n\nAprès des études de violon, de musicologie, d'écriture et de composition, Serge Lemouton se spécialise dans les différents domaines de l'informatique musicale au département Sonvs du Conservatoire national supérieur de musique de Lyon. Depuis 1992, il est réalisateur en informatique musicale à l'Ircam. Il collabore avec les chercheurs au développement d'outils informatiques et participe à la réalisation des projets musicaux de compositeurs parmi lesquels Florence Baschet, Laurent Cuniot, Michael Jarrell, Jacques Lenot, Jean-Luc Hervé, Michaël Levinas, Magnus Lindberg, Tristan Murail, Marco Stroppa, Fréderic Durieux et autres. Il a notamment assuré la réalisation et l’interprétation en temps réel de plusieurs œuvres de Philippe Manoury, dont K…, la frontière, On-Iron, Partita 1 et 2, et l’opéra Quartett de Luca Francesconi.\n\nActuellement, il s’intéresse plus particulièrement à la transmission et la préservation des œuvres du répertoire de l’informatique musicale.",
                "date_modified": "2026-02-27T09:18:37.644467+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 25,
                        "forum_user": 76,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [
                            {
                                "id": 276,
                                "membership": 25
                            },
                            {
                                "id": 563,
                                "membership": 25
                            },
                            {
                                "id": 751,
                                "membership": 25
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "lemouton",
            "first_name": "Serge",
            "last_name": "Lemouton",
            "bookmarks": []
        },
        "slug": "antony-workshop-2026",
        "pk": 4274,
        "published": true,
        "publish_date": "2026-01-28T17:01:36+01:00"
    },
    {
        "title": "iː ɡoʊ weɪ // données artificiellement inintelligibles dada - Jonathan Reus",
        "description": "Live neo-dada AI pour voix(s)",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par: Jonathan Reus<br /><a href=\"https://forum.ircam.fr/profile/jchaime/\">Biographie</a></p>\r\n<p>Cette performance semi-improvis&eacute;e explore des sons vocaux abstraits, dans la tradition de la po&eacute;sie phon&eacute;tique dada&iuml;ste, en suivant un interpr&egrave;te dont la voix et le verbe lui &eacute;chappent peu &agrave; peu. La performance part du texte du po&egrave;me sonore dada&iuml;ste probablement le plus connu, \"Die Sonata in Urlauten\" (Ursonate) de Kurt Schwitters, et d&eacute;veloppe ce texte avec des sons de voix suppl&eacute;mentaires et des fragments de langage coll&eacute;s en collaboration avec un mod&egrave;le de texte pr&eacute;dictif. Au fur et &agrave; mesure que la performance progresse, la voix du performeur s'&eacute;loigne de plus en plus de sa propre voix biologique, augment&eacute;e et transform&eacute;e par des mod&egrave;les de transfert de voix en temps r&eacute;el. Le performeur commence &agrave; jouer &agrave; travers un clone de sa propre voix qui assimile bient&ocirc;t des fragments de la voix de Jaap Blonk, l'un des plus c&eacute;l&egrave;bres po&egrave;tes sonores vivants et performeurs d'Ursonate. 
La voix augment&eacute;e du performeur progresse, se dissout et fusionne en des formes multi-humaines, polyphoniques, chorales et extraterrestres.</p>\r\n<p>L'int&eacute;r&ecirc;t de cette performance est de c&eacute;l&eacute;brer l'effacement de la voix en tant que marqueur d'identit&eacute;.</p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1809,
                "name": "live coding",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 308,
                "name": "Live electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1855,
                "name": "realtime",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 22,
                "name": "Voice",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 614,
                "name": "Traitement vocal",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 197,
                "name": "Voice synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 473,
                "name": "Voice transformation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 30585,
            "forum_user": {
                "id": 30538,
                "user": 30585,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/jcr-cherubclunk-sq-sm_fYxFnFo.png",
                "avatar_url": "/media/cache/9a/53/9a539bb8663ed5a05696aa8bd125a5fe.jpg",
                "biography": "Jonathan Reus is a transdisciplinary musician known for his use of experimental technologies and media-technological concepts in performance. He was born in New York and thereafter lived in Amsterdam and then Florida, where he became involved in the American “new weird” folk-art movement. He later immigrated to the Netherlands and developed a uniquely intimate electronic sound practice combining jazz improvisational approaches with traditional folk elements. He is co-founder of the instrument inventors initiative [iii] in the Hague, Netherlands Coding Live [nl_cl], and a recipient of the W. J. Fulbright Fellowship for his research into hybrid human-machine performance at the former Studio for Electro-Instrumental Music [STEIM] in Amsterdam. Jonathan has received commissions as a composer from Stedelijk Museum, Amsterdam, Slagwerk Den Haag percussion ensemble, and Asko-Schönberg contemporary music ensemble. He is an affiliate of the Intelligent Instruments Lab (Reykjavik) and the Sussex Humanities Lab (Brighton), where he is a PhD candidate in music.",
                "date_modified": "2026-02-07T10:46:48.310956+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jchaime",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "i-o-we-an-artificially-unintelligible-exploration-of-voice",
        "pk": 2777,
        "published": true,
        "publish_date": "2024-03-01T09:58:58+01:00"
    },
    {
        "title": "Imperceptible audio communication",
        "description": "It is now possible to store data in music.",
        "content": "<p>&nbsp;Two researchers at ETH Z&uuml;rich achieved to embed data in music. We are not talking about subliminal messages that could be hidden in the lyrics of a song, but actual data such as WI-FI passwords or phone numbers.</p>\r\n<p>Manuel Eichelberger and Simon Tanner developed a technique that transmits data to a smartphone through a piece of music, without altering the quality of the track. In order to do so, they imagined adding slightly higher and lower frequencies, at a lower volume, to the dominant tones. The result is unnoticeable to the human ear.</p>\r\n<p>This technology could be very useful to coffee shops, hotels or transports, allowing them to transfer encrypted data at a rate of 200 bits per second, which is around 25 letters per second. Imagine getting in a coffee-shop, and being automatically connected to the local WI-FI thanks to the background music.</p>\r\n<p>source : https://tik-old.ee.ethz.ch/file/8a61c16532c1d4f9021d3aaf06f4f381/imperceptible_audio_communication.pdf</p>",
        "topics": [
            {
                "id": 70,
                "name": "Audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 256,
                "name": "Fingerprint",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 258,
                "name": "Metadata",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17668,
            "forum_user": {
                "id": 17664,
                "user": 17668,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/da1d29bfe197a712b11475ec23296c5e?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "louisdesh",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "imperceptible-audio-communication",
        "pk": 290,
        "published": true,
        "publish_date": "2019-07-22T13:59:50+02:00"
    },
    {
        "title": "Banned Sound - Wenkai Pan",
        "description": "Le son interdit est un son imperceptible. Il existe en Chine depuis longtemps et s'accompagne généralement d'activités taboues. Les gens se sentent mal à l'aise, effrayés et bouleversés lorsqu'ils sont entourés de sons interdits.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par:&nbsp;Wenkai Pan&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/wenkai0214/\">Biography<br /><br /></a></p>\r\n<p>Le son interdit est un son imperceptible.</p>\r\n<p>Le caract&egrave;re unique de la g&eacute;ographie et de la culture de l'Asie de l'Est donne naissance &agrave; ce son, et les personnes qui en sont entour&eacute;es manifestent g&eacute;n&eacute;ralement des &eacute;motions instables.</p>\r\n<p>Afin d'offrir au public une exp&eacute;rience immersive, j'ai cr&eacute;&eacute; une s&eacute;rie d'installations g&eacute;ographiques interactives. Le public peut librement explorer une zone fictive, faire l'exp&eacute;rience du son interdit et des histoires cach&eacute;es.</p>\r\n<p></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 1893,
                "name": "Geographical walk",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 849,
                "name": "interactive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 53048,
            "forum_user": {
                "id": 52986,
                "user": 53048,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f3eed2c6d9d75bbd476b9ef3c1e5f7d8?s=120&d=retro",
                "biography": "I am Wenkai Pan, a digital designer and artist with strong curiosities about everything new. I enjoy to be narrative with technical approaches, showcasing the reality as I see it and the world within my heart to the audience.",
                "date_modified": "2024-03-13T15:58:44.598228+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "wenkai0214",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "banned-sound",
        "pk": 2830,
        "published": true,
        "publish_date": "2024-03-13T16:02:25+01:00"
    },
    {
        "title": "AI Swing!",
        "description": "Résidence en recherche artistique 2018.19.\r\nRaphaël Imbert et Benjamin Lévy.\r\nEn collaboration avec les équipes Représentations musicales et Analyse des pratiques musicales de l'Ircam-STMS.",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">R&eacute;sidence en recherche artistique 2018.19</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<p><strong>AI Swing! Analyser et Improvisation / Intelligence artificielle / Cr&eacute;ation et Interdisciplinarit&eacute;.</strong><br />En collaboration avec les &eacute;quipes<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/repmus/\">Repr&eacute;sentations musicales</a><span>&nbsp;</span>et<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/apm/\">Analyse des pratiques musicales</a><span>&nbsp;</span>de l'Ircam-STMS.</p>\r\n<p>Fruit d'une collaboration &agrave; long terme autour d'OMax, ce projet explore trois domaines principaux : Les possibilit&eacute;s d'analyse pour &eacute;tendre les champs d'application et les param&egrave;tres ; l'utilisation ethno-musicale sur les archives et du mat&eacute;riel enregistr&eacute; ; les aspects graphiques de la visualisation.</p>\r\n<p>&Agrave; partir de 2009, Rapha&euml;l Imbert et Benjamin L&eacute;vy ont jou&eacute; avec OMax dans des situations tr&egrave;s nombreuses et vari&eacute;es, du concert aux ateliers p&eacute;dagogiques et conf&eacute;rences scientifiques. Notre curiosit&eacute; pour la recherche scientifique, historique et musicale nous a permis de recueillir un grand nombre de retours d'exp&eacute;riences et d'id&eacute;es &agrave; la fois pratiques et th&eacute;oriques sur la mani&egrave;re de pousser plus loin le domaine de la co-improvisation avec ce syst&egrave;me informatique. Dans ce projet nomm&eacute; AI Swing!, nous avons organis&eacute; ces axes de recherche en trois th&egrave;mes principaux. 
Bien que le premier objectif ait &eacute;t&eacute; de g&eacute;n&eacute;rer de nouvelles interactions musicales, les principes d'OMax sont tr&egrave;s puissants pour analyser les improvisations de n'importe quelle esth&eacute;tique et de n'importe quelle &eacute;poque. De plus, il est capable de fournir une visualisation et un usage cr&eacute;atif de ces analyses qui se r&eacute;v&egrave;lent tr&egrave;s pertinents. Nous proposons de faire avancer ce sujet, de formaliser et d'am&eacute;liorer la capacit&eacute; d'analyse d'OMax et de ses successeurs vers un usage plus large et plus g&eacute;n&eacute;rique.</p>\r\n<p>Gr&acirc;ce &agrave; la capacit&eacute; d'analyse pr&eacute;c&eacute;demment mentionn&eacute;e d'OMax, &agrave; l'investigation de mat&eacute;riel musical historique comme les premiers enregistrements de Jazz solo, les archives d'anciennes pr&eacute;dications et beaucoup d'autres documents significatifs dans l'histoire musicale, nous pensons que la connaissance interne du syst&egrave;me peut capter des traits essentiels de la structure musicale. Et nous souhaitons approfondir les aspects ethno-musicaux d'un tel outil notamment vers des objectifs historiques et p&eacute;dagogiques.</p>\r\n<p>Enfin, l'aspect visualisation d'OMax ne doit pas &ecirc;tre sous-estim&eacute; &agrave; des fins artistiques, analytiques et p&eacute;dagogiques. Apr&egrave;s de nombreuses ann&eacute;es de volont&eacute; et d'id&eacute;es sur l'utilisation transdisciplinaire d'OMax, nous esp&eacute;rons d&eacute;velopper les possibilit&eacute;s graphiques du syst&egrave;me et le lier &agrave; diff&eacute;rents arts et &agrave; diff&eacute;rentes esth&eacute;tiques. Comme OMax et sa famille d'outils ont &eacute;merg&eacute; de l'&eacute;quipe Repr&eacute;sentations musicales de l'IRCAM et que nous avons gard&eacute; des liens avec cette &eacute;quipe au fil des ann&eacute;es, il est naturel d'apporter ce projet aux chercheurs de cette &eacute;quipe. 
Cependant, les d&eacute;couvertes historiques et ethno-musicales que nous aimerions formaliser r&eacute;sonneraient particuli&egrave;rement bien avec l'&eacute;quipe Analyse des pratiques musicales et surtout avec le travail de Cl&eacute;ment Canonne sur l'analyse de l'improvisation libre.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Rapha&euml;l Imbert et Benjamin L&eacute;vy</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<figure class=\"person-list-box__image profile\"></figure>\r\n<img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202019/.thumbnails/levy_imbert.jpg/levy_imbert-135x135.jpg\" alt=\"person\" /></div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographies</h3>\r\n<p><strong>Rapha&euml;l Imbert</strong><br />Rapha&euml;l Imbert est n&eacute; &agrave; Thiais le 2 juin 1974. Il apprend &agrave; jouer du saxophone &agrave; l'&acirc;ge de quinze ans en autodidacte, puis entre au conservatoire de Marseille dans la classe de jazz de Philippe Renault.&nbsp; Il y obtient le Premier prix de Conservatoire en 1995 avec Jean-Jacques Elangu&eacute;. Il fut lui-m&ecirc;me assistant professeur dans la classe de jazz du conservatoire de Marseille de 2003 &agrave; 2006.</p>\r\n<p>En 1996 il fonde les groupes Heml&eacute; Orchestra et Atsas-Imbert Consort (&Eacute;mile Atsas (guitare), Vincent Lafont (piano), et Jean-Luc Di Fraya (percussions)), avec lesquels il se produit notamment sur les sc&egrave;nes de Jazz &agrave; Vienne, Nice Jazz Festival, et la Fiesta des Suds &agrave; Marseille. 
Il cr&eacute;e &agrave; Marseille en 2002 avec des musiciens, sociologues, journalistes, m&eacute;lomanes, le Collectif l&rsquo;Enclencheur, &agrave; la vie &eacute;ph&eacute;m&egrave;re, qui d&eacute;fend un projet de r&eacute;flexion int&eacute;grant la pratique du jazz dans une vision de la soci&eacute;t&eacute; plus globale. En 2003, il est laur&eacute;at du programme &laquo; La Villa M&eacute;dicis Hors les Murs &raquo; pour son travail de recherche sur la musique sacr&eacute;e dans le jazz, r&eacute;alis&eacute; pendant six mois &agrave; New York. D&egrave;s lors, ce s&eacute;jour devient l'&eacute;l&eacute;ment fondateur des compositions de Rapha&euml;l Imbert.</p>\r\n<p>Rapha&euml;l Imbert d&eacute;veloppe un projet p&eacute;dagogique qu&rsquo;il met en pratique au conservatoire de Marseille depuis 2003, ainsi que dans de nombreux s&eacute;minaires, tels que le festival Jazz &agrave; Cluny et la formation des arts de la rue de la Fai&rsquo;art. Il propose en classe de ma&icirc;tre une m&eacute;thode d'improvisation pour ensembles de musique de chambre.</p>\r\n<p>Suite au projet<span>&nbsp;</span><em>Bach - Coltrane</em>&nbsp; avec le Quatuor Manfred et Andr&eacute; Rossi, il collabore r&eacute;guli&egrave;rement avec de nombreux musiciens classiques&nbsp; : Chiara Banchini, Johan Farjot, Arnaud Thorette, Karol Beffa, Jean-Guihen Queyras, Pierre-Olivier Queyras, Genevi&egrave;ve Laurenceau...</p>\r\n<p>Il est membre du Conseil d&rsquo;administration de l&rsquo;Orchestre national de jazz de septembre 2004 &agrave; septembre 2007 et remporte avec son groupe Newtopia Project le grand prix d'orchestre ainsi que le deuxi&egrave;me prix de soliste du 28e Concours national de jazz de la D&eacute;fense en juin 2005. 
Il a compos&eacute; pour le cin&eacute;ma et la t&eacute;l&eacute;vision pour les projets de Philippe Carrese et Isabelle Boni-Claverie.</p>\r\n<p><strong>Benjamin L&eacute;vy</strong><br />Aujourd&rsquo;hui r&eacute;alisateur en informatique musicale &agrave; l&rsquo;Ircam, Benjamin L&eacute;vy est issu d&rsquo;une double formation sup&eacute;rieure en informatique et musique. Il entretient depuis 2008 une collaboration autant scientifique et technique qu&rsquo;artistique avec plusieurs &eacute;quipes de l&rsquo;Ircam en particulier autour du logiciel d&rsquo;improvisation OMax.</p>\r\n<p>Comme ing&eacute;nieur R&amp;D et d&eacute;veloppeur, il travaille &eacute;galement au sein d&rsquo;entreprises de technologies audio et cr&eacute;atives. En tant que musicien &agrave; l&rsquo;ordinateur, son travail s&rsquo;int&egrave;gre &agrave; des projets artistiques vari&eacute;s dans la musique contemporaine, le jazz, l&rsquo;improvisation libre, le th&eacute;&acirc;tre, la danse. Il a collabor&eacute; notamment avec des chor&eacute;graphes tels qu&rsquo;Aur&eacute;lien Richard, dans le th&eacute;&acirc;tre musical avec Benjamin Lazar et joue r&eacute;guli&egrave;rement avec le saxophoniste de jazz Rapha&euml;l Imbert.</p>\r\n</div>\r\n</div>\r\n<h2 class=\"dotted\">Liens</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"http://www.raphaelimbert.com/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>http://www.raphaelimbert.com/</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 1651,
                "name": "Improvisation, générativité et interactions co-créatives",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 32,
                "name": "Recherche Artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 30,
                "name": "Recherche Musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ai-swing",
        "pk": 26,
        "published": true,
        "publish_date": "2019-03-21T16:41:13+01:00"
    },
    {
        "title": "bellplay~: Software and Sound Design in ludus vocalis. By Felipe Tovar-Henao",
        "description": "This presentation explores bellplay~, an open-source software for algorithmic audio developed in Max. It describes the software's script-based architecture based on the bell programming language, and its application in «ludus vocalis», a 25-minute multimedia work where bellplay~ was used for multichannel sound design and partially control visual elements.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img alt=\"bellplay\" src=\"https://forum.ircam.fr/media/uploads/user/6f591949ea6963722fb3e0fe089be634.png\" width=\"964\" height=\"694\" /></p>\r\n<p>Presented by : Felipe Tovar-Henao</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/felipetovarhenao/\" target=\"_blank\">Biography</a></p>\r\n<p>This presentation introduces<span>&nbsp;</span><em>bellplay~</em>, an open-source software and framework for offline algorithmic audio, developed in Max/MSP within the<span>&nbsp;</span><em>bach</em><span>&nbsp;</span>ecosystem.<span>&nbsp;</span><em>bellplay~</em><span>&nbsp;</span>allows users to manipulate audio using scripts written in the<span>&nbsp;</span><em>bell</em><span>&nbsp;</span>programming language, offering a highly flexible, data-driven approach to algorithmic sound design. By leveraging<span>&nbsp;</span><em>bellplay~</em>, composers and sound designers can implement customizable algorithms for audio synthesis, processing, and analysis.</p>\r\n<p>The lecture will be divided into two parts. First, I will provide an overview of<span>&nbsp;</span><em>bellplay~</em>'s core features and workflow, focusing on its integration with the<span>&nbsp;</span><em>bach</em>,<span>&nbsp;</span><em>dada</em>, and<span>&nbsp;</span><em>ears</em><span>&nbsp;</span>packages. This section will emphasize the framework's script-based architecture, dynamic capabilities, and its seamless interface with the<span>&nbsp;</span><em>bell</em><span>&nbsp;</span>programming language. 
I will also highlight some<em><span>&nbsp;</span></em>advanced techniques possible in<span>&nbsp;</span><em>bellplay~</em>, such as audio mosaicking, concatenative synthesis, and data-driven sampling.</p>\r\n<p>The second part will present a case study:<span>&nbsp;</span><em>ludus vocalis</em>, a large-scale, fixed multimedia work where<span>&nbsp;</span><em>bellplay~</em><span>&nbsp;</span>played a pivotal role in both the audio and visual elements. I will demonstrate how<span>&nbsp;</span><em>bellplay~</em><span>&nbsp;</span>was used to design and assemble the entirety of the audio/musical material, while also generating control data to influence visuals in TouchDesigner, a popular tool for real-time audiovisual performance. This example illustrates<span>&nbsp;</span><em>bellplay~</em>&rsquo;s versatility, not only as a powerful audio tool but also as a system for multimedia projects.</p>\r\n<p>Through these two sections, attendees will gain a deep understanding of how<span>&nbsp;</span><em>bellplay~</em><span>&nbsp;</span>provides compositional flexibility and technical precision for exploring algorithmic composition and audio processing.<br /><br /><img alt=\"preview of ludus vocalis\" src=\"https://forum.ircam.fr/media/uploads/user/2ae64ccbef180ab1ea8d750d64b5a106.png\" width=\"1259\" height=\"708\" /></p>",
        "topics": [
            {
                "id": 2527,
                "name": "algorithmic audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 669,
                "name": "Bach",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2526,
                "name": "bell",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1125,
                "name": "multimedia",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 7953,
            "forum_user": {
                "id": 7950,
                "user": 7953,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/author_headshot.png",
                "avatar_url": "/media/cache/5f/53/5f533165fbe54973a10eec94546b99f0.jpg",
                "biography": "Felipe Tovar-Henao is a US-based multimedia artist, developer, and researcher whose work explores computer algorithms as expressive tools for human and post-human creativity, cognition, and pedagogy. This has led him to work on a wide variety of projects involving digital instrument design, software development, immersive art installations, generative audiovisual algorithms, machine learning, music information retrieval, human-computer interaction, and more. His music is often motivated by and rooted in transformative experiences with technology, philosophy, and cinema, and it frequently focuses on exploring human perception, memory, and recognition.\n\nHe has held research and teaching positions at various institutions, including as the 2021/22 CCCC Postdoctoral Researcher at the University of Chicago, Lecturer in Music Theory and Composition at Universidad EAFIT, as well as Associate Instructor and Coordinator of the IU JSoM Composition Department. He currently serves as the 2023/25 Charles H. Turner Postdoctoral Fellow in Music Composition at the University of Cincinnati's College-Conservatory of Music.",
                "date_modified": "2026-03-02T21:34:35.083680+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1004,
                        "forum_user": 7950,
                        "date_start": "2016-06-13",
                        "date_end": "2025-11-13",
                        "type": 0,
                        "keys": [
                            {
                                "id": 637,
                                "membership": 1004
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "felipetovarhenao",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "bellplay-software-and-sound-design-in-ludus-vocalis",
        "pk": 3204,
        "published": true,
        "publish_date": "2025-01-10T06:07:32+01:00"
    },
    {
        "title": "Preference for the chromatic sound scale",
        "description": "German version of the essay \"preference for the chromatic sound scale\".\nCopyleft: GPLv3 see-for: http://www.gnu.org/licenses/gpl-3.0.html\n\nWith expressed permission to translate into all other languages",
        "content": "<p>&nbsp;</p>\n<p>== Text-Only ==</p>\n<p>&nbsp;</p>\n<p>Pr&auml;ferenz f&uuml;r die Chromatische Klangskala</p>\n<p>Inhaltsverzeichnis<br />Einf&uuml;hrung 5<br />Das Besondere Wesen der Tonh&ouml;he und die Emanzipation der Klangfarbe 6<br />Pythagoras in der Schmiede 7<br />Pythagoras Einfache Intervalle h&ouml;ren sich sch&ouml;n an 8<br />Pythagoras Zahlenmystik und Serielle-Musik 9<br />Der Logarithmisch Ma&szlig;stab der Empfindung 9<br />Der Logarithmische-Ma&szlig;stab nicht nur der Tonh&ouml;he 10<br />Der Rhythmus 10<br />Musik als Kunst in der Zeit 11<br />Zeitstrukturen nicht nur in der Musik des Menschen 11<br />Der Geh&ouml;rsinn 12<br />Musik ( und Kunst ) als Spiel der M&ouml;glichkeiten 14<br />Darstellung von Strukturen in der Zeit 16<br />Quantenphysik und alles ist Schwingung und Resonanz 17<br />Zwischenbetrachtung zur Zeit-Struktur der Musik 18<br />Musik-Theorie nicht nur f&uuml;r Menschen 18<br />Die Quinte &ndash; prim&auml;r Intervall aller ( Chromatischen ) Musik 18<br />Proportionen und Resonanzen 21<br />Resonanzen bei Mensch Tier und in der Physik 21<br />Andere Metrische Gr&ouml;&szlig;en Andere Musik 22<br />Der Aufbau des Atoms als Beispiel f&uuml;r Dinge anderer Gr&ouml;&szlig;enordnungen 23<br />Die Gr&ouml;&szlig;enordnung der Quantentheorie Zwischen-Betrachtung 25<br />Wie so Gesetze des Denkens erhalten werden k&ouml;nnen 27<br />Ungew&ouml;hnliche Ans&auml;tze f&uuml;r eine Neue-Theorie-der-Musik 27<br />Bedeutung einer vom Menschen unabh&auml;ngigen Theorie der Musik 28<br />Und ihre Bedeutung f&uuml;r die Computer-Musik 28<br />K&ouml;nnen Computer Musikalisch sein 29<br />Doch Computer verstehen die Welt nicht kommen damit aber klar 30<br />Das Wesen der Musik in der &bdquo;Neuen Kunst&ldquo; 31<br />Der Gew&uuml;nschte Effekt 31<br />Die Industrialisierung der Kunst 31<br />Der Betrachter der Kunst als Mit-Gestalter 31<br />Wie sieht nicht Menschliche Kunst aus 31<br />K&ouml;nnte der Betrachter 
&bdquo;prinzipiell&ldquo; alle Kunst verstehen 32<br />Zu den Proportionen 34<br />Welches aber sind diese Proportionen? 34<br />Andere bekannte Proportionen in der Kunst 34<br />Die einfachsten Proportionen in der Musik 35<br />Auf diese Intervalle Baut die Musik Auf 36<br />Der Faktor 2 in der Musik 37<br />Musik als das Spiel mit dem M&ouml;glichen und seiner Grenzen 39<br />Das Wesen der Harmonischen-Funktion 39<br />Spannung und Umdeutung in der Musik 40<br />Probleme einer Aussage zur Endg&uuml;ltigkeit der Chromatischen-Skala 41<br />Zu den Problemen 41<br />Erreichbarkeit aller Tonh&ouml;hen durch Quinte und Oktave 41<br />Die kleinst stufige Tonh&ouml;hen Skala 43<br />Neue Harmonik neue Skalen 44<br />Probleme mit solchen Geh&ouml;ren Harmonikern 44<br />Die Kunst f&uuml;hrt zu neuen M&ouml;glichkeiten 44<br />Das sch&ouml;ne in Kunst Musik Logik und Mathematik 45<br />Das Wesen der Philosophie ( in diesem Sinne ) 46<br />Das Wesen der Proportionen 48<br />Anfang der Musik aus der Struktur der Proportionen 48<br />Sammlung unseres Ansatzes zu den Proportionen 49<br />Ein neuer Ansatz zur Theorie Harmonischer Schichten in der Musik 49<br />Die M&ouml;glichkeit von Zwischen-Schichten 49<br />Bekannte Harmonische-Strukturen 50<br />Die bekannten Tonarten 50<br />Das Logische Problem der Tonarten 52<br />Musik kann immer als L&ouml;sung eines Logischen Problems verstanden werden 52<br />Musik als in Computer-Sprache formulierte L&ouml;sung eines Problems 53<br />M&ouml;gliche Formen der Niederschrift dieser Probleme 53<br />M&ouml;gliche Aspekte f&uuml;r die Kunst der Musik aus beiden Formen 56<br />Das Improvisieren an sich 57<br />Verschiedene Formen der Form der Musik 58<br />Die Frage der Absoluten Musik 58<br />Das Problem &bdquo;absoluter Musik&ldquo; im Kontext der Intermedialit&auml;t 59<br />Was k&ouml;nnen wir von dieser Zwischenschicht lernen 59<br />&Uuml;bertragbares ?!? 
60<br />Asymmetrie bereichert die Harmonik 61<br />Das K&uuml;nstlerische Spiel mit der Harmonik 61<br />Kandinsky &uuml;ber die Regeln des Spiels mit den Parametern in der Kunst 62<br />Die Terz macht den Unterschied zwischen Moll und Dur 64<br />Daraus ergibt sich nun also die Frage: 65<br />Die n&ouml;tige &Uuml;bertragung 65<br />Kandinskys Theorie des Punktes versus Harmonische Unter-Schichten 66<br />Die Frage nach dem rechten Inhalt des Kunstwerkes 68<br />Die Findung einer Idee f&uuml;r ein Kunstwerk 68<br />Das Problem des fehlenden Inhalts 69<br />Zur neuen Harmonischen-Schicht 69<br />Sekunde statt Terz 69<br />Versuch der &Uuml;bertragung der Kadenz auf diese neue Schicht 70<br />&Uuml;ber Schl&uuml;sse und Modulation 70<br />Die kleineren Harmonischen Funktionen 71<br />Mehrere Schichten ein Werk 71<br />Modulationen &uuml;ber verschiedene Schichten 72<br />Spannung durch Modulation 72<br />Musik muss sich in der Zeit Ereignen 73<br />Gibt es &Uuml;berhaupt Musik ohne Zeit 73<br />Neuem Harmonik zum Dritten 74<br />Die Notwendigkeit der Vermittlung von Werken dieser Neuen-Harmonik 74<br />ist eine Unterteilung der Kunst in Unterhaltung und ernst gemeinter Kunst Sinnvoll. 
75<br />Techno / Trance als &bdquo;Absolute Musik&ldquo; mit Unterhaltungs-Wert 76<br />Der Ernst zerst&ouml;rt das Kunstwerk 77<br />Kunst ist vielmehr ein Spiel 77<br />&Uuml;ber die Umst&auml;nde der Produktion von Kunst 77<br />Die Aufgabe von Komponist / K&uuml;nstler und Toningeneur / Techniker 78<br />Neue Aufgaben in der &bdquo;Neuen Kunst&ldquo; 78<br />Erweiterung der Aufgaben 79<br />Ein neuer Titel f&uuml;r diese Berufung 80<br />Das Internet Radio von CreCo 80<br />Gr&uuml;nde f&uuml;r diese K&uuml;nstlerische Intervention 80<br />Verweis auf das &bdquo;Subversive Lexikon&ldquo; von CreCo 81<br />Noch einmal Mikrotonale-Musik ( Abschluss ) 81<br />Welche Mikrotonale Musik ich meine und Ablehne 81<br />Mikrotonalit&auml;t ist nicht Musik mit verstimmten Instrumernten 82<br />Mikrotonalit&auml;t entsteht aus feinerer Harmonik 82<br />Das Mathematische Modell der Musik als Kunstwerk 83<br />Verstimmungen auch im besten Nummerischen-Modell 83<br />Musik aus dem Computer als Reinst-Form &bdquo;Absoluter Musik&ldquo; 83<br />Die Beziehung von &bdquo;Computer-Musik&ldquo; zum Begriff der &bdquo;Absoluten Musik&ldquo; 84<br />Schluss Anmerkungen zur berechneten Musik 85<br />Am Beispiel der Ur-Techno Gruppe &bdquo;Kraftwerk&ldquo; 85<br />Die Aussage von &bdquo;Kraftwerk&ldquo; zur Berechneten-Musik 85<br />Schluss Anmerkungen zur Mikrotonalit&auml;t 86<br />Das Problem der falschen Mikrotonalit&auml;t 86<br />Das generelle Paradoxon 86<br />Besch&auml;ftigung mit diesem Paradoxon 86<br />Harmonische Bedeutung statt kleinster Intervalle 87<br />Unm&ouml;glichkeit der Intermedialen &Uuml;bertragung von Kunst auf Basis des Konkreten 88<br />Das Prinzip dass sich im Kunstwerk Realisiert ist zu finden 88<br />Unterschied zwischen Harmonischer-Funktion in Europa und Semiotischer-Bedeutung in Asien 89<br />Die Semiotik des Ger&auml;uschs 91<br />Unterschiede zwischen dem Denken in Asien versus Europa 92<br />Die unterschiedliche Entwicklung von Musik und Kunst in 
der Geschichte 93<br />Schluss 95<br />Was es aber gibt 95<br />Der Mensch empfindet nicht allein es reagiert auch die Physik 96<br />Also 96<br />Also Harmonische Bedeutung 96<br />Das Subversive der &bdquo;Neuen Kunst&ldquo; 97<br />Das Denken als Manipuliertes Objekt 97<br />Der Weg zur&uuml;ck zu Befreitem Empfinden 98<br />P.S.: 99<br />&Uuml;bertragung auf den Rhythmus 99<br />&Uuml;bertragung zur Seriellen-Musik 99</p>\n<p>Einf&uuml;hrung</p>\n<p>Ein Thema welches mich schon von Kindheit an Besch&auml;ftigt hat, war die Frage warum in allen bisherigen Kompositionen &ndash; Unter Ausnahme der Mikrotonalen Kompositionen ( https://de.quora.com/Was-ist-mikrotonale-Musik | https://www.musiklexikon.ac.at/ml/musik_M/Mikrotonale_Musik.xml | https://de.wikipedia.org/wiki/Spektralmusik ) &ndash; als gr&ouml;&szlig;te Auff&auml;cherung / Unterteilung der Tonh&ouml;hen die so genannte Chromatische Skala ( https://www.wissen.de/lexikon/chromatik-musik | https://p-advice.com/what-is-chromatic-scale | https://de.wikipedia.org/wiki/Chromatik https://de.wikipedia.org/wiki/Farbs%C3%A4ttigung ) genutzt wurde. Und in Gewisser weise heute nur noch Moll und Dur Skalen genutzt werden ( https://de.wiktionary.org/wiki/Dur | https://de.wikipedia.org/wiki/Dur | https://www.gutefrage.net/frage/d-dur--d-moll-beziehung ). 
Wenn sich auch hier wie gerade bei der Mikrotonalen Musik eine Umgew&ouml;hnung in &ndash; eben &ndash; die Chromatik erfolgt.</p>\n<p>Das Besondere Wesen der Tonh&ouml;he und die Emanzipation der Klangfarbe<br />Dabei sei zu sagen, dass die Tonh&ouml;he &ndash; das kann man der Entwicklung der Musik entnehmen ( https://de.wikipedia.org/wiki/Musiktheorie | http://www.lehrklaenge.de/ | https://de.wikipedia.org/wiki/Kammerton ) &ndash; schon immer der Wesentliche Parameter der Musik ( https://de.wikipedia.org/wiki/Parameter_(Musik) | https://commons.wikimedia.org/wiki/Category:Aspects_of_music?uselang=de | https://www.wissen.de/lexikon/parameter-musik | http://www.gym-raubling.de/medien/Unterricht/Musik/musik_analyse_klasse_10.pdf ) war . Erst in der Neuen Musik haben sich Bestrebungen gezeigt, stattdessen mit der Klangfarbe zu Komponieren ( http://universal_lexikon.deacademic.com/260289/Klangfarbenkomposition | https://ronaldkah.de/klangfarbe-musik/ | https://de.wikipedia.org/wiki/Klangkunst ). Solche Kompositionen sind wie Bilder in denen das ganze Bild nur eine Farbe hat, diese aber so Exakt so mit Textur versetzt ist, dass diese Eine Farbe ausreicht ( https://de.wikipedia.org/wiki/Monochrome_Malerei | http://www.beyars.com/kunstlexikon/lexikon_6073.html | http://architekturundkunst.ml/elina-monochromatische-digitale-malerei-eine-frau-ist-in-schwarz-und-weis-gemalt/ ) um den Wesentlichen Gedanken der Komposition zu vermitteln ( https://www.daskreativeuniversum.de/romantik-kunst/ | https://www.stadt-wand-kunst.de/die-idee/ | https://www.kunstplaza.de/tipps-fuer-kuenstler/44-inspirierende-zitate-fuer-mehr-kreativitaet/ ).</p>\n<p>Pythagoras in der Schmiede<br />Es scheint auf den Pythagoras in seiner Schmiede zur&uuml;ck zu gehen. 
Und seiner Erfahrung der Kl&auml;nge unterschiedlich schwerer / gro&szlig;er Hammer ( https://physik.cosmos-indirekt.de/Physik-Schule/Pythagoras_in_der_Schmiede | https://www.waldorf-ideen-pool.de/Schule/faecher/physik/klasse-6/versuche-zur-akustik/pythagoras-in-der-schmiede | https://www.wikiwand.com/de/Pythagoras_in_der_Schmiede ), die zur Erfindung des Monochords ( http://arnheiter-m.de/musiktheorie/das-monochord/ | https://www.secret-wiki.de/wiki/Monochord | https://www.deutsches-museum.de/fileadmin/Content/010_DM/020_Ausstellungen/080_Musikinstrumente/030_Workshops/010_Monochord/Das_Monochord_-_Eine_Bauanleitung.pdf ) gef&uuml;hrt haben soll.<br />Pythagoras Einfache Intervalle h&ouml;ren sich sch&ouml;n an<br />Er soll damals auf den Gedanken gekommen sein, dass eine Proportion um so harmonischer ist, als ihre Verh&auml;ltnisse auf einfachen ( M&ouml;glichst kleinen ) Ganzzahlen aufbauen ( https://www.br.de/fernsehen/ard-alpha/sendungen/grundkurs-mathematik/grundkurs-mathematik-mathematik-proportionalitaeten102.html | https://www.daskreativeuniversum.de/proportion-in-der-kunst/ | https://de.wikipedia.org/wiki/Pythagoreer | https://www.lernhelfer.de/schuelerlexikon/mathematik/artikel/pythagoreer ) . <br />Pythagoras Zahlenmystik und Serielle-Musik</p>\n<p>Was im weiteren dazu gef&uuml;hrt haben soll, dass Pythagoras diese Idee der einfachen Proportionen auf fast jedes in Zahlen ausdr&uuml;ckbares Verh&auml;ltnis verallgemeinert hat. 
Dass man nun in der Seriellen Musik ( https://www.wissen.de/lexikon/serielle-musik | https://schulwiki.koeln/wiki/Serielle_Musik | http://www.frisius.de/rudolf/texte/tx317.htm ) eben dies in der &Uuml;bertragung des Intervall Ansatzes auf alle anderen Parameter der Musik nachvollzogen hat m&ouml;chte ich hier kurz schreiben ( http://www.indiepedia.de/index.php?title=Serielle_Musik | http://www.indiepedia.de/index.php?title=Karlheinz_Stockhausen | https://www.lernhelfer.de/schuelerlexikon/musik/artikel/zufallskompositionen ).<br />Der Logarithmisch Ma&szlig;stab der Empfindung<br />Dabei sollte schon jetzt ausgesprochen / ausgeschrieben sein, dass der Mensch einen Logarithmischen Sinn f&uuml;r den Eindruck der Tonh&ouml;he hat ( https://www.lernhelfer.de/schuelerlexikon/biologie/artikel/tonhoehe-und-lautstaerke | https://de.wikipedia.org/wiki/Tonh%C3%B6he | http://www.sengpielaudio.com/TonhoeheInAbhaengigkeitVomSchallpegel01.pdf ).</p>\n<p>Der Logarithmische-Ma&szlig;stab nicht nur der Tonh&ouml;he<br />&Uuml;bertr&auml;gt man dies auf andere Parameter der Musik, so kann dies zu gewissen Problemen f&uuml;hren ( https://de.wikipedia.org/wiki/Serielle_Musik | http://www.frisius.de/rudolf/texte/tx318.htm | https://www.zeit.de/2009/43/N-Musik-und-Hirn ).<br />Der Rhythmus<br />Ein Beispiel k&ouml;nnte der Rhythmus ( https://www.helpster.de/unterschied-von-metrum-und-rhythmus_196599 | https://de.wikipedia.org/wiki/Rhythmus_(Musik) | https://blog.landr.com/de/was-ist-rhythmus/ ) darstellen. Hier wird im Grunde mit einer Verschnellerung ( Accelerando ) oder einer Verlangsamung ( Ritardando ) gearbeitet ( https://de.wikipedia.org/wiki/Tempo_(Musik) | http://www.mozart-tempi.net/4579/52001.html | http://www.musik-steiermark.at/musikkunde/notenlehre/tempo.htm ). 
Dabei kann aber &ndash; bei n&auml;herer Betrachtung &ndash; gesehen werden, dass auch dies 2er Logarithmen gehorcht ( https://www.musiklexikon.ac.at/ml/musik_Z/Zahlensymbolik.xml | https://de.wikipedia.org/wiki/Zahlensymbolik | https://www.badw.de/fileadmin/pub/akademieAktuell/2008/26/11_Bernhard.pdf ).<br />Musik als Kunst in der Zeit<br />Nun ist aber Musik nichts anderes als eine Struktur die sich ins Konkreta gewandelt in der Zeit ereignet ( https://de.wikipedia.org/wiki/Musik | http://www.philosophie.uni-bremen.de/fileadmin/redak_philo/Papers/Mohr/2012_Mohr_Musik_erlebte_Zeit.pdf | https://www.arte-fact.org/untpltcl/adrnphms.html ). Und Dauer und Schnelligkeit der Ver&auml;nderung in der Zeit die Basis-Gr&ouml;&szlig;en der gesamten Musik ( https://www.lernhelfer.de/schuelerlexikon/physik/artikel/schall-und-musik | https://ateliergaensefuesschen.wordpress.com/2013/12/05/musik-pure-emotion-physikalisch-erklart/ | https://www.lernhelfer.de/schuelerlexikon/musik/artikel/klang-physikalische-aspekte ). Man kann also von diesem Ansatz ausgehend die Serielle Musik herleiten.<br />Zeitstrukturen nicht nur in der Musik des Menschen<br />Nun dieses Prinzip kann auch in der Physik gefunden werden &ndash; hier spricht man einmal von Resonanz ( https://www.lernhelfer.de/schuelerlexikon/physik/artikel/resonanz | https://www.spektrum.de/lexikon/physik/resonanz/12359 | https://wiki.yoga-vidya.de/Resonanz ) und einmal vom Aufbau des Spektrums eines ( musikalischen ) Klanges ( http://www.lehrklaenge.de/PHP/Tonsystem/Obertonreihe.php | https://www.oberton.org/obertongesang/die-obertonreihe/ | https://www.oberprima.com/obertonreihe-partialtonreihe/ ). 
Um hier ehrlich zu sein, der Aufbau dieses Spektrums kann auf die Erscheinung der Resonanz ( https://gehoerbildung-musiktheorie.de/obertonreihe/ | https://de.wikipedia.org/wiki/Oberton#Obertonreihe | https://physik.cosmos-indirekt.de/Physik-Schule/Oberton ) zur&uuml;ck gef&uuml;hrt werden.<br />Der Geh&ouml;rsinn</p>\n<p>&Uuml;berhaupt scheint es sich beim Menschlichen &ndash; und in gewisser Weise auch Tierischen &ndash; H&ouml;rverm&ouml;gen, um einen etwas abgewandelten Resonator zu handeln ( https://www.einfacher-hoeren.de/de/wie-funktioniert-unser-gehoer__35/ | https://www.kind.com/de-de/magazin/so-hoeren-wir/wie-funktioniert-das-gehoer/ | https://www.amplifon.com/web/ch-de/das-gehoer | https://curdt.home.hdm-stuttgart.de/PDF/Wahrnehmung_von_Musik.pdf | http://www.informatik.uni-ulm.de/ni/Lehre/SS04/HSSH/pdfs/TonhoehenI.pdf ).<br />Was ist eigentlich Klang</p>\n<p>Ein Klang der eigentlich nichts anderes ist, als eine Schwankung &ndash; und damit komplexe Form einer Schwingung ( Man m&ouml;ge daran Denken, dass s&auml;mtliche Periodischen Schwankungen mit der Fourieranalyse ( https://de.wikipedia.org/wiki/Fourier-Analysis | https://www.spektrum.de/lexikon/physik/fourier-analyse/5240 | https://www.spektrum.de/lexikon/mathematik/fourier-analyse/3147 ) auf ein komplexes System von Sinus / Einfachen Schwingungen zur&uuml;ckgef&uuml;hrt werden kann. Und das auch mit der FFT ( https://de.wikipedia.org/wiki/Schnelle_Fourier-Transformation | https://www.fairaudio.de/lexikon/fourier-transformation/ | http://www.sprut.de/electronic/pic/16bit/dsp/fft/fft.htm ) nicht periodische Schwingungen auf wandelnden solchen Systemen zur&uuml;ck gef&uuml;hrt werden k&ouml;nnen ( https://www.praktikumphysik.uni-hannover.de/fileadmin/praktische-physik/AP/Material/Crash_Fourier.pdf | https://de.wikipedia.org/wiki/Fourierreihe | https://www.maths.tcd.ie/pub/HistMath/People/Riemann/Trig/Trig.pdf ) ) sind. 
<br />Wie der Mensch die Musik empfindet<br />Diese Schwingungen ihrerseits rufen Schwingungen an Gewissen Lamellen im Geh&ouml;r des Menschen hervor. Je besser nun die Resonanz zwischen beiden Schwingungstr&auml;gern ist ( https://www.spektrum.de/lexikon/biologie-kompakt/gehoersinn/4636 | https://www.lernhelfer.de/schuelerlexikon/biologie-abitur/artikel/akustische-sinnesorgane-im-vergleich | https://www.geers.de/rund-ums-hoeren/das-ohr/ ), um so deutlicher wird die betreffende Schwingungsart wahrgenommen. &Auml;hnliches eben passiert auch beim Tiere. <br />Messger&auml;te die Frequenzen messen k&ouml;nnen</p>\n<p>Ja nicht nur dies, wurde dieses Prinzip nicht schon genutzt um am Anfang der Elektrotechnik Maschinen zu konstruieren / Messger&auml;te zu benutzen um Schwingungen zu messen ( http://www.imn.htwk-leipzig.de/~ebersb/elektrotechnik/lehrblatt/lehrblatt16.pdf | https://de.wikipedia.org/wiki/Resonanzfrequenz | https://www.elektrotechnik-fachwissen.de/wechselstrom/schwingkreis.php ).<br />Musik ( und Kunst ) als Spiel der M&ouml;glichkeiten<br />Das andere Fundament ist das Spiel mit den verschiedenen M&ouml;glichkeiten der Mathematik und Logik die das Wessen abstrakter Musik erf&uuml;llen ( http://wwwu.uni-klu.ac.at/hstockha/neu/kunstkreativitaet.pdf | http://www.damanhur.org/de/kunst-und-kreativitat | https://www.halem-verlag.de/wp-content/uploads/2018/06/9783869623245_le.pdf ).<br />M&ouml;glichkeit verschiedener Formen der Logik</p>\n<p>An dieser Stelle m&ouml;chte ich kurz noch sagen, dass es nicht nur eine Logik gibt. 
Im Forum des Philosophie-Forums habe ich zur Frage was den &Uuml;berhaupt eine Logik ist, einen Thread gestartet ( https://www.philosophie-raum.de/index.php/Thread/28514-Was-ist-eigentlich-eine-Logik/ | https://de.wikipedia.org/wiki/Logik | https://www.phil-fak.uni-duesseldorf.de/philo/geldsetzer/Lo_Bib_08.pdf ).<br />Ein Fragment eines Ansatzes zur Definition von Logik</p>\n<p>Um es hier zu sagen: Logiken sind Systeme die sich selbst rekursiv auslegen. Das Bedeutet es sind Systeme die sagen wie andere Systeme ausgelegt werden sollen / wie aus den S&auml;tzen anderer Systeme weitere Aussagen abgeleitet werden k&ouml;nnen ( https://de.wikipedia.org/wiki/Aussagenlogik | https://www.iep.utm.edu/prop-log/ | https://wirtschaftslexikon.gabler.de/definition/praedikatenlogik-46631 ). Sie machen dies aber auch auf sich selber m&ouml;glich ( https://www.neuronation.de/logik/das-logische-denken | https://de.wikipedia.org/wiki/Formelsammlung_Logik | https://de.wikipedia.org/wiki/Logikgatter ). Im weiteren muss es m&ouml;glich sein jede Logik mit den Mitteln jeder anderen Logik zu beschreiben ( https://de.wikipedia.org/wiki/Klassische_Logik | https://plato.stanford.edu/entries/logic-classical/ | https://de.wikipedia.org/wiki/Nichtklassische_Logik ). Und es muss m&ouml;glich sein, interessierende Fakten &uuml;ber ein betrachteten Sachverhalt in der Logik dieses Sachverhalts &ndash; oder der Gruppe dieser Sachverhalte &ndash; effektiv darzustellen ( http://www.gavagai.de/themen/HHPT31.htm | https://de.wikibooks.org/wiki/Logik | http://www.psiquadrat.de/downloads/hilbert_naturwissenschaft1930.pdf ). 
<br />Die M&ouml;glichkeit einer generellen Logik</p>\n<p>Was hier gerade bemerkt werden kann ist auch es gibt etwas was all diese Formen von Logiken gemein haben, dies nenne ich die Gottes Logik ( http://kath.net/news/61066 | http://www.kath.net/news/68970 | https://christliche-autoren.de/logik-des-glaubens.html ).<br />Darstellung von Strukturen in der Zeit<br />Die Fourieranalyse</p>\n<p>Auch hier m&ouml;chte ich noch einmal auf das eben eingeklammerte verweisen. Um in der Mathematik vern&uuml;nftig mit Schwingungsvorg&auml;ngen zu rechnen bietet sich die Nutzung der Fourieranalyse an ( https://lp.uni-goettingen.de/get/text/4937 | https://www.eit.hs-karlsruhe.de/mesysto/teil-a-zeitkontinuierliche-signale-und-systeme/spektrum-eines-signals/fourier-reihe/anwendungen-der-fourier-reihe.html | https://www.spektrum.de/lexikon/chemie/fourier-transform-technik/3467 ).<br />Die Fourieranalyse und ihre Anwendungen</p>\n<p>In unseren Tagen hat sich dies l&auml;ngst durchgesetzt, kann man mit dieser Methode / diesem Ansatz doch vereinfacht mit Schwingungssystemen in der Elektronik der Mechanik, Akustik ja selbst der Optik und der Quantenphysik Arbeiten.<br />Quantenphysik und alles ist Schwingung und Resonanz</p>\n<p>L&auml;sst sich zur Quantenphysik doch letztlich sagen, dass alles als die Auswirkung von Schwingungen betrachtet werden kann ( https://dieblauehand.info/quantenphysik-am-ende-ist-alles-schwingung-und-energie/ | https://www.spektrum.de/news/physiker-koppeln-schwingungen-einzelner-atome/1064721 | https://de.wikipedia.org/wiki/Harmonischer_Oszillator_(Quantenmechanik) ) was letztlich zur Stringtheorie ( https://de.wikipedia.org/wiki/Stringtheorie | http://hep.itp.tuwien.ac.at/~kreuzer/strings.html | https://www.stringwiki.org/wiki/String_Theory_Wiki ) gef&uuml;hrt hat.<br />Zwischenbetrachtung zur Zeit-Struktur der Musik</p>\n<p>Das Musik mehr sein kann als dies das Musik Semiotisch Ausgewertete werden kann, werde ich sp&auml;ter noch erkl&auml;ren ( 
https://www.spin.de/forum/msg-archive/17/2009/11/53170 | http://agis-www.informatik.uni-hamburg.de/WissPro/auditives/archive/M-Musiksemiotik/kapitel3/sprache-musik-2.html | https://de.wikipedia.org/wiki/Semiotik ). But let me already point here to so-called musique concrète ( https://de.wikipedia.org/wiki/Musique_concr%C3%A8te | http://www.frisius.de/rudolf/texte/tx355.htm | http://www.frisius.de/rudolf/texte/index.htm ). <br />Music theory not only for humans<br />For this reason it may be possible to found a music theory that is independent of the purely subjective aesthetics of the human listener ( https://www.stormbringer.at/storys/505/page1/musik-ohne-menschen.html | https://vistano.com/tierheilkunde/wissen-aus-der-tierwelt/hoeren-tiere-musik/ | https://www.pcwelt.de/a/aiva-die-kuenstliche-intelligenz-komponiert-die-musik-der-zukunft,3450745 ).<br />The fifth &ndash; the primary interval of all ( chromatic ) music</p>\n<p>The chromatic scale can in fact be traced back to the second step of this construction ( of the proportions ) ( http://www.musikurlaub.com/online-gitarrenschule/musiktheorie/intervalle/chromatisch-diatonisch.html | https://www.christian-baehrens.de/system/files/9175/original/Die_Intervalle_-_Ursprung_und_Systematik.pdf?1516050635 | https://musikwissenschaften.de/lexikon/c/chromatisch/ ).<br />What proportions are</p>\n<p>Proportions &ndash; in this context &ndash; are relations that rest on the basis of an abstractum ( https://www.sem-deutschland.de/inbound-marketing-agentur/online-marketing-glossar/semantik-definition-und-grundlagen/ | https://zeichnen-lernen.net/gestalten/semantik-bedeutung-inhalt-273.html | https://www.gutefrage.net/frage/unterschied-semantik-und-syntax ).<br />Examples of numbers as building blocks of proportions</p>\n<p>A small example: a simple number is a concretum; it can be represented by a vector ( https://www.lernhelfer.de/schuelerlexikon/mathematik-abitur/artikel/darstellung-von-vektoren | https://www.matheretter.de/wiki/vektoren/lineare-algebra | https://www.matheretter.de/wiki/vektoren-darstellung ), that is, by a certain drawn line. A system of such lines can represent a geometric figure ( https://de.wikipedia.org/wiki/Vektor | https://fabulierer.de/vektorrechnung-fuers-abitur/ | https://www.mathe-seite.de/oberstufe/analytische-geometrie/grundlagen/ | https://www.mathe-lerntipps.de/mathe-abitur/analytische-geometrie/ | https://www.salierblog.de/was-ist-eine-vektorgrafik/ ). <br />The simple manipulations of numbers as vectors</p>\n<p>There are now two changes of this figure &ndash; the ones of interest to us &ndash; that do not alter the actual form of the figure. These are scaling ( https://www.ingenieurkurse.de/hoehere-mathematik-analysis-lineare-algebra/vektorrechnung/einfuehrung-in-die-vektorrechnung/skalieren-von-vektoren.html | https://de.wikipedia.org/wiki/Skalierung_%28Computergrafik%29 | https://cc.bingj.com/cache.aspx?q=Skalierung+in+der+Vektor-Grafik&amp;d=4918751301735074&amp;mkt=de-DE&amp;setlang=de-DE&amp;w=XN1CZQRhK1a8ylHs8J5JZG8THYBF2yOQ ) and translation ( https://www.mathelounge.de/243311/wie-verschieben-mit-vektoren | https://www.mathebibel.de/verbindungsvektor | http://userpage.fu-berlin.de/decarmen/einfuehrung_vektoren.pdf ). We regard scalings as changes of the abstracta ( https://de.wikipedia.org/wiki/Abstraktum | https://de.wikipedia.org/wiki/Abstract | https://de.wikipedia.org/wiki/Abstraktion ), translations as changes in the concreta ( https://de.wikipedia.org/wiki/Konkretum | https://www.wissen.de/fremdwort/konkret | https://de.wikipedia.org/wiki/Konkrete_Kunst ). 
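The distinction drawn above can be sketched in a few lines: scaling changes a figure's "abstract" size while keeping its form, translation changes only its "concrete" location. A minimal pure-Python example with 2D points; the triangle is an illustrative assumption.

```python
# Scaling: multiply every coordinate by a factor (the form is preserved).
def scale(points, factor):
    return [(x * factor, y * factor) for x, y in points]

# Translation: shift every coordinate by the same offset (identity preserved).
def translate(points, dx, dy):
    return [(x + dx, y + dy) for x, y in points]

triangle = [(0, 0), (4, 0), (0, 3)]
print(scale(triangle, 0.5))         # a smaller triangle, same shape
print(translate(triangle, 10, 10))  # the same triangle, at another place
```

A scaled tree stays a tree at its place, perhaps a small one; a translated tree stays this very tree, only elsewhere, which is exactly what the next paragraph says in prose.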
If we scaled a tree, it would remain a real tree in its place &ndash; perhaps a small tree; if we translated it, it would even remain this very tree, only at another place.<br />Proportions and resonances<br />Resonances in humans, animals, and physics</p>\n<p>That resonances build on whole-number ratios is not a purely human subjectivity but a fact of physics at the scale humans experience as their environment ( https://uwudl.de/forum/weiterfuehrende-themen-philosophie-theologie-etc/1658-leben-auf-insel-n-zwischen-quanten-und-relativitaetstheorie.html | https://de.wikipedia.org/wiki/Relativit%C3%A4tstheorie | https://de.wikipedia.org/wiki/Quantenphysik ).<br />Other metric scales, other music<br />That deviations become possible once this scale is left becomes clear at the latest in quantum physics.<br />The scale of quantum physics</p>\n<p>Here, after all, there are such strange effects as symmetries that are broken and must be described in mixed fractions ( https://www.faz.net/aktuell/wissen/physik-mehr/quantenteleportation-hyperfein-und-verschraenkt-1622816.html | http://www.thur.de/philo/project/qt.htm | https://www.faz.net/aktuell/wissen/physik-mehr/quantenphysik-ein-leuchtendes-beispiel-fuer-spuk-1212168.html ). 
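The whole-number ratios of resonance, and the Fourier analysis mentioned earlier, can be seen together in a small sketch: a vibrating "string" supports only whole-number multiples of its fundamental, and a discrete Fourier transform locates those partials at integer frequency bins. The signal (a fundamental at 4 cycles per window plus a partial at 3x that frequency) is an illustrative assumption; the DFT here is a naive textbook implementation, not an optimized FFT.

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive DFT: normalized magnitude at each frequency bin up to Nyquist."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

n = 128
fundamental = 4  # cycles per analysis window (arbitrary choice)
signal = [math.sin(2 * math.pi * fundamental * t / n)
          + 0.5 * math.sin(2 * math.pi * 3 * fundamental * t / n)
          for t in range(n)]

mags = dft_magnitudes(signal)
peaks = [k for k, m in enumerate(mags) if m > 0.1]
print(peaks)  # the energy sits exactly at integer multiples of the fundamental
```

The spectral energy lands on bins 4 and 12, i.e. on whole-number multiples of the fundamental, which is the physical fact the paragraph above appeals to.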
Here it might be possible to find resonances in ratios other than the natural ones.<br />Would the listener still understand such music</p>\n<p>Although it should be pointed out that such proportions can lead to a music the human listener can no longer follow ( http://www.musikurlaub.com/online-gitarrenschule/klassische-gitarre/moderne/musikgeschichte.html | https://www.wissen-digital.de/Moderne_(Musik) | https://www.dw.com/de/verbotene-kl%C3%A4nge-im-ns-staat/a-16834460 ).<br />Can the average listener be won over to such music</p>\n<p>Man, with his everyday logic, is not suited to such dimensions. But it should be clear that the listener must be given the possibility of following along, since otherwise the music degenerates into cacophony ( https://de.wikipedia.org/wiki/Musikvermittlung | https://www.freitag.de/autoren/rmatern/zeitgenoessische-klassische-musik | https://www.bundesregierung.de/breg-de/bundesregierung/staatsministerin-fuer-kultur-und-medien/kultur/kunst-kulturfoerderung/foerderbereiche/musikfoerderung/musik-318078 ).<br />The structure of the atom as an example of things at other scales<br />An example to illustrate this is the structure of the atom. The question arises: what does an atom look like?<br />Nobody can see atoms<br />The answer to this question is as simple as it is startling.</p>\n<p>The failure of optics for the smallest things <br />An atom does not look like anything at all ( https://www.ds.mpg.de/117271/02 | https://www.wissen.de/bildwb/wie-macht-man-atome-sichtbar | https://www.zeit.de/1966/38/atome-werden-sichtbar ). For "looking like something" would require the interaction of light with the atom. 
In the human macrocosm, however, this interaction works differently &ndash; reflection at surfaces ( http://www.pflichtlektuere.com/08/11/2013/wissenswert-wie-sehen-wir-eigentlich/ | https://www.planet-wissen.de/natur/sinne/sehen/index.html | http://www.biologie-schule.de/sehen-visuelle-wahrnehmung.php ) &ndash; than at the level of the atom &ndash; diffraction ( https://www.leifiphysik.de/kern-teilchenphysik/teilchenphysik/grundwissen/die-vier-fundamentalen-wechselwirkungen | https://www.leifiphysik.de/kern-teilchenphysik/teilchenphysik/grundwissen/elektromagnetische-wechselwirkung | https://physik.cosmos-indirekt.de/Physik-Schule/Fundamentale_Wechselwirkung ). Since the basic property of an appearance is lacking, one cannot speak of the appearance of an atom at all.<br />What picture of the atom we nevertheless have<br />What we have before our eyes when we think of an atom &ndash; for example the Rutherford atomic model ( https://www.lernhelfer.de/schuelerlexikon/physik-abitur/artikel/rutherfordsches-atommodell | https://de.wikipedia.org/wiki/Rutherfordsches_Atommodell | http://www.ffn.ub.es/luisnavarro/nuevo_maletin/Rutherford%20(1911),%20Structure%20atom%20.pdf ) &ndash; is an interpretation of measurement data ( https://de.wikibooks.org/wiki/Atommodelle:_Geschichte | https://www.grund-wissen.de/physik/atomphysik/atommodelle.html | https://de.wikipedia.org/wiki/Liste_der_Atommodelle ), which in turn are too abstract to lead stringently to an appearance. 
<br />How this picture was superseded by quantum theory<br />And within quantum theory this model ultimately even had to be abandoned ( https://www.lernhelfer.de/schuelerlexikon/physik-abitur/artikel/quantenmechanisches-atommodell | https://physikunterricht-online.de/jahrgang-12/quantenmechanisches-atommodell/ | https://www.lernhelfer.de/schuelerlexikon/chemie-abitur/artikel/das-quantenmechanische-atommodell ).<br />The scale of quantum theory: an interim view<br />It is similar with other analogies between the human macrocosm and the microcosm of quantum physics. <br />Here we can still calculate but no longer picture<br />This culminates in the fact that in this microcosm man can still calculate &ndash; and thus operate &ndash; but can no longer form any mental image of the data used in doing so ( https://www.focus.de/wissen/mensch/naturwissenschaften/quantenphysik-endlich-verstanden-deshalb-kann-ein-objekt-an-zwei-orten-gleichzeitig-sein_id_4352630.html | https://www.zeit.de/wissen/2017-12/wissenschaft-quantenphysik-schicksal-vorherbestimmung-naturforscher-podcast | http://scienceblogs.de/hier-wohnen-drachen/2013/01/05/quantenzustaende/ ).<br />Everyday concepts lose their ordinary meaning</p>\n<p>And ultimately, in these dimensions, concepts such as matter, energy, and force fields lose their original everyday meaning ( http://www.genius.co.at/index.php?id=411 | http://www.faszinierende-welt.com/quantenphysik-energie/ | https://www.wasistwas.de/archiv-wissenschaft-details/max-plancks-quantentheorie.html ).<br />The concept of matter ( of substance ) as an example<br />Ultimately one can even say that the sense of matter, for example, is replaced by the sense of the force field &ndash; of force fields communicating with one another ( 
https://www.sein.de/es-gibt-keine-materie-nur-wellen-warum-der-raum-das-universum-bestimmt/ | https://www.manifestation-boost.de/max-plancks-gr%C3%B6%C3%9Fte-erkenntnis-es-gibt-keine-materie/ | https://www.spektrum.de/news/was-ist-wirklich-real/1365934 ).<br />How laws of thought can thus be preserved<br />And this also represents the way out of Leibniz's monad problem, though spelling that out would definitively exceed the scope of this essay ( https://de.wikipedia.org/wiki/Monade_(Philosophie) | https://www.textlog.de/6446.html | https://www.philosophie.phil.uni-erlangen.de/qualitaet/arbeitsmittel/HABsp3_Leibniz.pdf | https://www.hermetik-international.com/de/mediathek/historische-schriften-der-mystik/gottfried-wilhelm-leibniz-die-monadologie/ ).<br />Unusual approaches to a new theory of music<br />At this point I will not even begin to discuss the possibilities of creating music according to the rules of fractals / chaos theory ( https://www.mpg.de/9369189/fraktal-musik-jeff-porcaro | http://www.theflutist.org/Fractals_in_Music.html | https://www.scinexx.de/news/technik/musik-enthaelt-versteckte-fraktale/ ). <br />The significance of a theory of music independent of humans<br />And its significance for computer music<br />Precisely the question of the independence of a music theory from the subjectivity of the human listener is a precondition of any theory of computer music. 
<br />What music / computers could mean</p>\n<p>Meaning not canned music from the computer for humans to be entertained ( https://de.wikipedia.org/wiki/Computermusik | http://www.roglok.net/wp/wp-content/uploads/2008/06/digital_vintage_thesis.pdf | http://www.eugenstaab.com/index-Dateien/docs/BenjaminKempe_EugenStaab-LejarenHiller.pdf ), but also music made &ndash; made in engagement &ndash; for and with computer systems ( http://www.indiepedia.de/index.php?title=Computermusik | http://www.indiepedia.de/index.php?title=Algorithmische_Komposition | http://www.indiepedia.de/index.php?title=Live-Elektronik | http://www.indiepedia.de/index.php?title=Live-Coding ).<br />Which is, after all, one of the maxims of the manifesto of the "New Art".<br />Can computers be musical</p>\n<p>And this seems to me an important point that arises when one asks: can computers create music &ndash; can computers engage with music? ( https://www.pcwelt.de/ratgeber/Kuenstliche-Intelligenz-in-der-Musik-10572713.html | https://www.zeit.de/digital/internet/2017-12/kuenstliche-intelligenz-musik-produktion-melodrive/seite-2 | https://www.delamar.de/fun/kuenstliche-intelligenz-41718/ )<br />What, for now, I do not mean<br />When I speak here of the creation of music by the computer, I am of course not necessarily assuming that the computer understands the world ( https://www.netzwoche.ch/news/2018-02-15/warum-ki-musik-komponieren-aber-keine-buecher-schreiben-kann | https://www.spiegel.de/wissenschaft/mensch/kuenstliche-intelligenz-wenn-der-computer-versteht-was-er-liest-a-1189094.html | https://www.handelskraft.de/2018/05/kuenstliche-intelligenz-verstehen-ein-muss-ist-5-lesetipps/ ). 
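The point that a program needs no understanding of the world to follow musical rules can be made concrete with a toy algorithmic composer: a random walk over the C-major scale. The scale, the step rule, and the seed are all illustrative assumptions; the program "knows" only the rule, not why the rule sounds acceptable.

```python
import random

SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # C-major pitch names

def compose(length, seed=0):
    """Generate a melody by rule alone: mostly stepwise motion from the tonic."""
    rng = random.Random(seed)
    degree = 0  # start on the tonic
    melody = []
    for _ in range(length):
        melody.append(SCALE[degree % len(SCALE)])
        degree += rng.choice([-1, 1, 2])  # a rule, not taste
    return melody

print(compose(8))
```

Like the self-driving car discussed below, the program needs to know only that it should move stepwise, never why; whatever musicality the output has lives entirely in the rules it was given.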
<br />And yet computers can already drive cars<br />Making music in this framework rather resembles autonomous driving by our computing servants. These days, trials are under way to let computers drive cars ( https://de.wikipedia.org/wiki/Selbstfahrendes_Kraftfahrzeug | https://autonomos.inf.fu-berlin.de/ | https://www.pkw.de/ratgeber/autonews/selbstfahrende-autos ). <br />Yet computers do not understand the world, but cope with it anyway<br />And yet no understanding of the world is required for this. The computer ultimately does not need to know why it should drive from A to B; it only needs to know that it should.<br />What, then, is music with, by, and for computers</p>\n<p>Computer music would thus be like an automatically generated text that satisfies all the rules of spelling and punctuation, but really only amounts to a mechanical linearization of a description of some state of affairs encoded as a semantic network ( http://www.wolfgang-wahlster.de/wordpress/wp-content/uploads/Natuerlichsprachliche_KI-Systeme_Entwicklungsstand_und_Forschungsperspektive.pdf | http://www.wolfgang-wahlster.de/wordpress/wp-content/uploads/Natuerlichsprachliche_Systeme_Einfuehrung.pdf | https://wiki.infowiss.net/Informationssystem ).<br />The essence of music in the "New Art"<br />The desired effect<br />The industrialization of art<br />Rest assured that precisely this effect is what is at stake when I speak of the industrialization of art within what I call the "New Art" ( https://de.wikipedia.org/wiki/Multiple | http://multipleart.net/ | http://www.gerisch-stiftung.de/ausstellung/multiple-art-und-serielle-unikate ).<br />The viewer of art as co-creator<br />Ultimately the "New Art" is about enabling the viewer, with the ingredient of the "product", to contribute his personal statement to art ( https://de.wikipedia.org/wiki/Kunst ).<br />What does non-human art look like</p>\n<p>And finally the question: what would music not of this Earth be like &ndash; and: could we exchange music with extraterrestrial life &ndash; communicate with it at all ( https://www.fr.de/rhein-main/main-taunus-kreis/hattersheim-ort87439/ausserirdische-kunst-10996816.html | https://www.hna.de/kultur/documenta/ausserirdische-kunst-documenta-portikus-frankfurt-zeigen-einen-meteoriten-936188.html | http://scienceblogs.de/astrodicticum-simplex/2017/06/09/sternengeschichten-folge-237-ausserirdische-kunst/ ).<br />Could the viewer understand all art "in principle"<br />And here it must be noted that not everything alien can be understood by humans at all. <br />Once more on quantum theory</p>\n<p>The sub-level of quantum physics &ndash; the level below the Planck scales ( https://www.chemie.de/lexikon/Planck-Einheiten.html | https://physik.cosmos-indirekt.de/Physik-Schule/Planck-Einheiten | http://www.joergresag.privat.t-online.de/mybkhtml/startbk.htm ) &ndash; must always remain inaccessible to our understanding of logic ( https://abenteuer-universum.de/diverses/planck.html | http://unendliches.net/german/index.htm?planckeinheiten.htm | https://de.wikipedia.org/wiki/Klassische_Physik ).<br />The problem of contingency in the theory of science<br />And this ultimately leads us to the contingency ( https://www.philosophie-raum.de/index.php/Thread/28657-Kl%C3%A4rrung-des-Begriffes-Kontingent/ | https://de.wikipedia.org/wiki/Kontingenz_(Philosophie) | https://uni-tuebingen.de/fileadmin/Uni_Tuebingen/Fakultaeten/PhiloGeschichte/Dokumente/Downloads/ver%c3%b6ffentlichungen/heidelberger/Die_Kontingenz_der_Naturgesetze_bei_Imile_Boutroux_final.pdf ) of all empirical knowledge. 
The approach of induction ( https://www.spektrum.de/lexikon/philosophie/induktion/964 | https://philo-wiki.de/induktion | https://www.philoclopedia.de/was-kann-ich-wissen/erkenntnistheorie/induktionsproblem/ ) &ndash; that a system behaves as it has repeatedly behaved before &ndash; is what makes it possible to speak of anything like experience in the first place. Without this belief, nothing can be known ( https://de.wikipedia.org/wiki/Induktion_(Philosophie) | https://plato.stanford.edu/entries/probability-interpret/ | http://www.princeton.edu/~harman/Papers/REC-Rev.pdf | https://www.iep.utm.edu/conf-ind/ ).<br />Statistics as the only primary content of scientific statements<br />Of course this says nothing at all about the possibility of statistics between action and reaction; only that such statistics would then lie beyond any explanatory power ( http://www.betriebswirtschaft-lernen.net/erklaerung/induktion/ | https://www.neuronation.de/science/was-bedeutet-deduktives-und-induktives-denken | https://user.phil.hhu.de/~cwurm/wp-content/uploads/2018/07/skript-induktive-logik-2.pdf ).<br />On the proportions<br />But which are these proportions?<br />The proportions at issue here are always combinations of the simplest numbers ( https://de.wikipedia.org/wiki/K%C3%B6rperproportion | https://de.wikipedia.org/wiki/Proportion_(Architektur) | https://de.wikipedia.org/wiki/Proportionalit%C3%A4t ).<br />Other well-known proportions in art<br />Of course, aesthetics and physics also know more complex number pairs. 
The most natural of these proportions in physics and in mathematics / geometry is the circle constant &lt;Pi&gt; ( https://matheguru.com/allgemein/die-kreiszahl-pi.html | https://de.wikipedia.org/wiki/Kreiszahl | https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Irrationalit%C3%A4t_und_Transzendenz#Die_Kreiszahl_%CF%80_ist_irrational | https://de.wikibooks.org/wiki/Beweisarchiv:_Algebra:_K%C3%B6rper:_Transzendenz_von_e_und_%CF%80 ). This number, with a value of roughly 3.1415&hellip;, stands opposite the more aesthetic number 2^(1/2) ( https://de.wikipedia.org/wiki/Wurzel_2 | https://mathworld.wolfram.com/PythagorassConstant.html | https://de.wikipedia.org/wiki/Engel-Entwicklung ) and the so-called golden ratio ( http://www.rdklabor.de/w/?oldid=90817 | https://de.wikipedia.org/wiki/Goldener_Schnitt | http://www.golden-section.eu/ ). <br />The function of these proportions<br />I will come back to the function of these ratios later ( https://www.lernhelfer.de/schuelerlexikon/kunst/artikel/proportion-und-goldener-schnitt | http://www.kunst-malerei.info/proportionen.html#.XmqY-tlCeUk | https://de.wikipedia.org/wiki/Kreis | https://symbolonline.de/index.php?title=Kreis | https://www.derkleinegarten.de/mehr-infos-bilder/symbollexikon/kreis-kranz-rad-uroboros.html ). <br />The simplest proportions in music<br />Now we come to the so-called simple proportions. <br />The octave<br />First of all, the ratio 1:2, i.e. the interval of the octave ( https://www.klamm.de/schlaufuchs/was-ist-eine-oktave-3596.html | https://www.theorie-musik.de/intervalle/ueber-der-oktave/ | https://de.wikipedia.org/wiki/Oktave ). 
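The contrast between these constants and the simple proportions can be checked numerically: pi, sqrt(2), and the golden ratio are irrational, so unlike the octave (1:2) no small whole-number ratio reproduces them exactly. A sketch using the standard library's `Fraction.limit_denominator`; the denominator bound of 10 is an arbitrary assumption for the illustration.

```python
import math
from fractions import Fraction

golden = (1 + math.sqrt(5)) / 2  # the golden ratio, about 1.618

for name, value in [("pi", math.pi), ("sqrt2", math.sqrt(2)), ("golden", golden)]:
    # Best rational approximation with denominator at most 10.
    approx = Fraction(value).limit_denominator(10)
    error = abs(value - approx.numerator / approx.denominator)
    print(f"{name}: closest simple ratio is {approx}, error {error:.4f}")
```

Every candidate ratio leaves a residual error, whereas the octave and fifth are given as exact ratios from the start; that exactness is what the resonance argument below relies on.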
<br />The fifth</p>\n<p>And the ratio of the fifth, 2:3 ( https://de.wikipedia.org/wiki/Quinte | http://www.brefeld.homepage.t-online.de/tonsysteme.html | https://musikanalyse.net/tutorials/quinte/ ).<br />Simple ???<br />Both are simple in the Pythagorean sense, and both also satisfy the requirement of physical resonance between vibration-capable media and their vibrations ( https://de.wikipedia.org/wiki/Quint-Oktav-Klang | https://musikwissenschaften.de/lexikon/o/oktave/ | https://musikwissenschaften.de/lexikon/q/quinte/quinte-1882/ ).<br />This requirement, for example, is violated by the circle proportion &lt;Pi&gt; as well as by the golden ratio and the square root of 2.</p>\n<p>Music is built on these intervals<br />Building the chromatic scale from these intervals<br />For if one stacks fifth upon fifth into ever larger intervals, one reaches every pitch step ( every so-called chromatic tone ), only spread over several octaves ( https://www.brass-online.de/quintenzirkel.htm | https://www.bonedo.de/artikel/einzelansicht/quintenzirkel-einfach-erklaert.html | https://einfach-musik.de/portfolio/abgezirkelt-am-runden-tisch/ ). This makes it necessary to gather these steps into a single octave by means of so-called octave transposition ( https://de.wikipedia.org/wiki/Oktavierung | https://432hzpro.com/oktave-oktavierung/ | https://www.enzyklo.de/Begriff/Oktavierung ). 
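The construction just described can be sketched directly: stack pure fifths (ratio 3:2) and fold each result back into one octave by halving until the ratio lies in [1, 2). Twelve steps of this yield the twelve distinct pitch degrees of the Pythagorean chromatic scale.

```python
def stack_fifths(count):
    """Stack pure fifths (3:2) with octave reduction into [1, 2)."""
    ratios = []
    r = 1.0
    for _ in range(count):
        ratios.append(r)
        r *= 3 / 2          # up a pure fifth
        while r >= 2:       # fold back into a single octave
            r /= 2
    return sorted(ratios)

scale = stack_fifths(12)
print(len(scale), [round(x, 3) for x in scale])
```

All twelve ratios are distinct and lie within one octave; a thirteenth fifth would land close to, but not exactly on, the starting tone (the Pythagorean comma), which is why the stacking is usually stopped at twelve.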
Octave transposition can be regarded as the most natural choice here, since the octave, as the first simple ratio, stands opposite the fifth as the second.<br />The factor 2 in music</p>\n<p>For the factor 2 ( https://vorteilhaftwebsite.com/die-bedeutung-der-zahl-2-numerologie-und-zahlenmystik/ | https://www.ewigeweisheit.de/geheimwissen/numerologie/zahlenmystik/die-zwei-2 | https://vorteilhaftwebsite.com/die-bedeutung-der-zahl-222-numerologie-und-zahlenmystik/ ) means the most perfect resonance between different frequencies.<br />The difference between relative pitch and octave register<br />Because of this almost absolute resonance, the human listener is more likely to confuse a C with a C' &ndash; two like-named pitches in different octave registers &ndash; than a C with a D &ndash; two different pitches within the same octave ( https://www.piano-akkorde.de/images/Leseprobe2.pdf | https://www.theorie-musik.de/grundlagen/tonhoehen-und-tonnamen-notennamen/ ). 
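The C versus C' relationship above can be made explicit: a note name recurs at every doubling of frequency. Splitting the base-2 logarithm of a frequency (relative to a reference C, here an assumed 261.63 Hz for middle C) into whole and fractional parts separates the octave register from the pitch class within the octave. All concrete numbers are illustrative assumptions.

```python
import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
C_REF = 261.63  # assumed reference frequency for middle C, in Hz

def name_and_octave(freq):
    """Map a frequency to (note name, octaves above the reference C)."""
    steps = round(12 * math.log2(freq / C_REF))  # semitones from the reference
    octave, pitch_class = divmod(steps, 12)
    return NAMES[pitch_class], octave

print(name_and_octave(C_REF))       # the reference C itself
print(name_and_octave(2 * C_REF))   # the "same" C, one octave higher
print(name_and_octave(440.0))       # concert A, in the reference octave
```

Doubling the frequency changes only the octave component and never the name, which is exactly the near-identity of C and C' the text appeals to.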
<br />Humans are specially disposed to this<br />This means that, by his particular physio-psychological constitution, man can grasp the octave-register character of a sound ( https://de.wikipedia.org/wiki/Frequenzgruppe | https://markus-fiedler.de/hoerphysiologie/ | http://www.allpsych.uni-giessen.de/thomas/teaching/pdf/Allg2008/04-hoeren.pdf | https://web.archive.org/web/20070313085642/http://ccat.sas.upenn.edu/music/music55/sept16.html ) independently of the sound's position among the actual steps ( the chromatic ones ) ( https://www.gutefrage.net/frage/was-ist-der-unterschied-zwischen-tonleiter-und-oktave | https://www.musiker-board.de/threads/tonumfang-einer-oktave-c-bis-c-oder-c-bis-b.453962/ | https://de.wikipedia.org/wiki/Stammton | http://www.medieval.org/emfaq/harmony/hex1.html#3 ).<br />Music as play with the possible and its limits<br />And precisely this peculiarity is the ground of the art of music. Music is play with the possibilities arising from this web ( https://de.m.wikipedia.org/wiki/Generalbass | https://ronaldkah.de/musik-komponieren/ | http://universal_lexikon.deacademic.com/242325/Generalbass%3A_Konzertierendes_Prinzip_und_Akkordaufbau ).</p>\n<p>The essence of harmonic function<br />A peculiarity of this web is that a tone usually keeps its harmonic function along with its note name, and can appear in almost any octave ( https://mtheorie.wordpress.com/2014/11/29/intervalle-uber-oktave/ | https://de.wikibooks.org/wiki/Musiklehre:_Intervalle | https://www.electricbass.ch/lektionen/harmonielehre/11 ).<br />Tension and reinterpretation in music<br />Music also knows so-called reinterpretation ( https://de.wikipedia.org/wiki/Modulation_(Musik) | https://musikanalyse.net/tutorials/modulation/ | https://de.wikipedia.org/wiki/Enharmonische_Verwechslung | https://www.textlog.de/2304.html ). 
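The octave-independence of harmonic function can be sketched with pitch classes: reducing MIDI note numbers modulo 12 discards the octave register and keeps only the step within the octave, so a chord and its inversion share the same pitch-class content. The chord voicings are illustrative assumptions.

```python
def pitch_classes(midi_notes):
    """Reduce notes to their step within the octave (octave register discarded)."""
    return sorted({n % 12 for n in midi_notes})

c_major_root = [60, 64, 67]       # C E G, root position
c_major_first_inv = [64, 67, 72]  # E G C', the same tones rearranged by octave
print(pitch_classes(c_major_root))
print(pitch_classes(c_major_first_inv) == pitch_classes(c_major_root))
```

Two differently voiced chords collapsing to one pitch-class set is the formal core of the inversion and reinterpretation arguments that follow.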
Since, as just said, the tones of a chord can appear in any octave position &ndash; what we call chord inversions ( http://www.lehrklaenge.de/PHP/Akkorde/AkkordeUmkehrungen.php | https://www.meyer-gitarre.de/musiklehre/akkorde/umkehrungen/ | http://www.musikerprogramme.de/GEHB/Dreiklaenge.html ) &ndash; two chords with different harmonic functions can, concretely, consist of the same sounds ( https://de.wikipedia.org/wiki/Funktionstheorie | https://musikanalyse.net/tutorials/funktion-und-sequenz/ | https://www.gmth.de/zeitschrift/artikel/481.aspx ). <br />Modulation<br />Only this makes so-called modulation possible in music.<br />Problems with any claim that the chromatic scale is final<br />Several problems now follow from this.<br />1. Why should it not be possible to build a scale on the fourth, for example ( and thus on the third simple ratio )?<br />2. Could this third ratio be paired with the second instead of the first?<br />3. Building a scale on other intervals altogether.<br />On these problems<br />Reachability of all pitches through fifth and octave</p>\n<p>Since all the intervals we know ( from the chromatic scale ) are already reachable by stacking fifths &ndash; and transposing back by octaves &ndash; it follows that there can be no intervals that could not already be reached this way and would come into question here ( https://www.theorie-musik.de/tonleiter/quintenzirkel/ | https://www.jupiter.info/de/wissen/profitipps/allgemeine-themen/musiktheorie-tonleitern-intervalle-quintenzirkel.html | https://de.wikibooks.org/wiki/Musiklehre:_Der_Quintenzirkel ).<br />Limits of hearing<br />Furthermore, we can say that human hearing can resolve music with a frequency resolution of about 1/10 of a semitone ( https://de.wikipedia.org/wiki/Relatives_Geh%C3%B6r | http://www.thinkingapplied.com/sight-singing_folder/sight-singing.pdf | https://de.wikipedia.org/wiki/Absolutes_Geh%C3%B6r | https://www.spiegel.de/wissenschaft/mensch/ueberraschende-faehigkeit-absolutes-gehoer-auch-unter-nichtmusikern-verbreitet-a-574561.html ). The finest-grained scale can therefore probably be taken to be the sixteenth-tone scale ( http://www.microtonal-synthesis.com/scales.html | http://www.microtonal-synthesis.com/scale_53tet.htm ).<br />Limits of hearing in animals and non-humans</p>\n<p>Unfortunately, this holds only for humans. Animals, or indeed physical instruments, can resolve far more precisely ( https://www.tierchenwelt.de/specials/tierleben/428-grosse-lauscher-und-taube-nuesse.html | https://www.planet-wissen.de/natur/sinne/hoeren/pwiedasgehoerdertiere100.html | https://www.lernhelfer.de/schuelerlexikon/biologie/artikel/tonhoehe-und-lautstaerke ).<br />The finest-grained pitch scale<br />As said, the sixteenth-tone scale can be assumed as the lower bound for humans and animals, as well as for physical systems at our everyday scale ( http://mu-sig.de/Theorie/pdf/Skalen.pdf | https://www.wikiwand.com/de/Mikrotonale_Musik | https://almanac.nma.bg/en/die-musik-des-20-jh-eine-herausforderung-fur-die-gehorbildung/ ).<br />New harmony, new scales<br />Problems with such hearings and harmonies <br />The circle constant could lead to new harmonies, but there would also have to be a kind of hearing valid for them. Yet we know only one principle for frequency-resolving hearing: the principle of resonance. 
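The resolution figures above can be put in cents, the standard logarithmic unit where one equal-tempered semitone equals 100 cents. A sixteenth-tone (a whole tone divided into 16, i.e. 96 steps per octave) then comes to 12.5 cents, close to the roughly 10-cent (1/10 semitone) resolution the text attributes to human hearing. A minimal sketch:

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

semitone = 2 ** (1 / 12)        # equal-tempered semitone
sixteenth_tone = 2 ** (1 / 96)  # 8 steps per semitone, 96 per octave

print(round(cents(semitone), 1))        # 100.0 cents
print(round(cents(sixteenth_tone), 1))  # 12.5 cents
```

Since 12.5 cents is only slightly above the assumed 10-cent threshold, the sixteenth-tone scale is about the finest grid whose steps a human listener could still be expected to distinguish, which is the claim being made above.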
More important than creating works with such new intervals would therefore be finding principles for such new kinds of hearing ( https://de.wikipedia.org/wiki/Frequenzz%C3%A4hler | https://de.wikipedia.org/wiki/Digitale_Messtechnik#Z%C3%A4hler | https://www.elektronikpraxis.vogel.de/signale-mit-einem-oszilloskop-analysieren-a-252618/ | https://www.netzfrequenzmessung.de/ ).<br />Art leads to new possibilities<br />But it is the principle of art &ndash; the principle of the game we call art ( https://stiftungbrandenburgertor.de/kunst-und-spiele/ | http://www.ulrichbaer.de/files/Methodenblaetter-Museumspaedagogik.pdf | http://www.kreativwerkstatt-karlstrasse.de/kinder.php ) &ndash; to anticipate this in practice and, through the acceptance of new intervals, to infer back to the possibility of new principles of hearing. <br />The beautiful in art, music, logic, and mathematics<br />Perhaps it is the special, the beautiful thing about mathematics and &ndash; with it &ndash; logic that almost everything is built up from the simplest elements and ultimately leads back to the simplest ( https://de.wikipedia.org/wiki/Axiomatische_Mengenlehre | https://www.lernhelfer.de/schuelerlexikon/mathematik/artikel/axiomatische-methode | https://www.lernhelfer.de/schuelerlexikon/mathematik/artikel/natuerliche-zahlen-axiomatischer-aufbau | https://www3.math.tu-berlin.de/Vorlesungen/WS10/LinAlg1/Materialien/folien_2010okt26.pdf ). Even if the simplest thereby leads to the richest ( https://de.wikipedia.org/wiki/Rasiermesser_%28Philosophie%29 | https://de.wikipedia.org/wiki/Ockhams_Rasiermesser | http://www.physics.adelaide.edu.au/~dkoks/Faq/General/occam.html ).<br />The essence of philosophy ( in this sense )<br />Of philosophy someone once said: philosophy is like a path around the building of knowledge. 
Philosophy must necessarily always return to its own beginnings. ( https://de.wikipedia.org/wiki/Sofies_Welt | https://www.dtv.de/_files_media/title_pdf/leseprobe-62000.pdf | https://www.dtv.de/_files_media/downloads/unterrichtsmodell-sofies-welt-62000-56.pdf )<br />What philosophy is good for</p>\n<p>Nor is it the task of philosophy so much to lead to anything essentially new ( https://www.philosophie-raum.de/index.php/Thread/28556-Einfache-Antworten-komplexe-Fragen/ | https://www.philosophie-raum.de/index.php/Thread/28567-Weis-der-Philosoph-wirklich-nichts-oder-nur-nichts-Endg%C3%BCltiges/ | https://www.philosophie-raum.de/index.php/Thread/28537-Philosophische-Fragen-mit-dem-Computer-beantworten/ ). Rather, the building of knowledge already stands before us. What matters for us is to stroll / saunter through this building and to view it again and again from new angles ( http://scienceblogs.de/arte-fakten/2010/05/16/warum-ist-philosophie-keine-wissenschaft/ | https://www.faz.net/aktuell/feuilleton/forschung-und-lehre/naturwissenschaft-und-philosophie-der-gestirnte-himmel-ueber-uns-12994386.html | https://www.philosophie.ch/philosophie/highlights/nachdenken-ueber-naturwissenschaften ). 
<br />Philosophy leads to the overall view<br />And thus also to arrive at an ever more complete overall view ( https://www.philosophie.ch/philosophie/grosse-fragen/wann-ist-philosophie-eine-wissenschaft | https://de.wikipedia.org/wiki/Wissenschaftstheorie | https://www.tau.ac.il/~agass/joseph-papers/shanker.pdf ).<br />The essence of proportions<br />The beginning of music from the structure of proportions</p>\n<p>And in this sense the essence of simple proportions can be regarded, since Pythagoras, as the beginning of all music ( https://de.wikipedia.org/wiki/Pythagoras_in_der_Schmiede | http://www.genius.co.at/index.php?id=49 | https://www.researchgate.net/publication/263468307_Physik_Musik_mit_Pythagoras_fing's_an ).<br />Contemporary music in search of the new</p>\n<p>And yet contemporary music will keep finding new connections in which the components of music can refer to one another and be defined by one another. ( https://www.indiepedia.de/index.php?title=Neue_Musik | https://jazzpages.de/john-cage-interview-gefuehrt-von-hans-kumpf-1975-120905/ | http://www.karlheinzstockhausen.org/#german )<br />Summary of our approach to the proportions<br />2.) Since 1.) thus leads no further in the world known to us, this second point too will lead no further and is therefore considered settled ( https://www.goethe.de/de/kul/mus/gen/neu/20454982.html | https://www.tip-berlin.de/ultraschall-festival-2020-gibt-es-noch-neue-musik/ | https://www.goethe.de/de/kul/mus/21733234.html ).<br />A new approach to a theory of harmonic layers in music</p>\n<p>The possibility of intermediate layers<br />The question remains, however, whether it might not still be useful to establish beneath the octave a &ndash; so to speak &ndash; intermediate layer ( http://www.mi.sanu.ac.rs/vismath/lene/ch3.htm | http://www.mi.sanu.ac.rs/vismath/lene/ch1.htm | http://www.aspm-samples.de/Samples2/pfleidep.pdf ).<br />In the harmonic versus the interval sense<br />What is at stake here &ndash; and this is what I want to show &ndash; is not hearing with other resonances, that is, other basic intervals, but their meaning in the harmonic sense ( https://freimaurer-wiki.de/index.php/Harmonik | https://www.gmth.de/zeitschrift/artikel/447.aspx | https://de.m.wikipedia.org/wiki/Klangreihe ).<br />Known harmonic structures<br />To examine the principle of such intermediate steps, let us first look at an intermediate layer familiar to all Western musicians.<br />The familiar keys<br />Here one could say that we find this intermediate layer in what we call the keys ( major, minor, the church modes ) ( https://www.amusio.com/19311/dur-und-moll-wann-und-wie/ | http://www.koelnklavier.de/quellen/tonarten/moll.html | http://www.koelnklavier.de/quellen/tonarten/dur.html ).<br />Ultimately the church modes can be traced back to major and minor with a shifted tonal centre ( https://de.wikipedia.org/wiki/Kirchentonart | https://www.hochweber.ch/theorie/modes/Kirchentonarten-EGTA.pdf | http://www.mater-dolorosa-lankwitz.de/wiki/musik:ethos_der_kirchentoene ).<br />The task of the keys<br />The task of the keys is accordingly to allow a finer articulation of the tonal steps than the octave alone provides.<br />The meaning of the tonic<br />This is usually achieved by erecting this order in a special relation to a tonal centre, the so-called tonic ( https://de.wikipedia.org/wiki/Tonika | https://musikwissenschaften.de/lexikon/t/tonika/ | http://www.musikzeit.de/theorie/kadenz.php?drucken=true ).<br />The logical problem of the keys<br />Fundamentally it can be said that each of these orders can be understood as the solution of a logical problem ( https://www.deutschlandfunkkultur.de/berechnete-klaenge.984.de.html?dram:article_id=153337 | http://www.paulhombach.de/sonifikation/ | http://www.beckmesser.de/komponisten/xenakis/portrait2011.html ).<br />Music can always be understood as the solution of a logical problem<br />Indeed, art as computer art can always be regarded as the search for a solution &ndash; transcribed into notes &ndash; of a logical problem ( https://www.cresc-biennale.de/download.php?itemID=2 | https://www.indiepedia.de/index.php?title=Algorithmische_Komposition | http://www.georghajdu.de/gh/fileadmin/material/articles/Computer_als_Inspiration.pdf ).<br />Music as the solution of a problem formulated in a computer language<br />I have already said above that music in this sense is always the interpretation of a kind of semiotic net ( https://www.schillingersociety.com/ | https://www.indiepedia.de/index.php?title=Computermusik | http://www.computermusicjournal.org/ ) into a temporal sequence of starts and ends of tones as realized frequency components &ndash; what we normally simply call music ( https://de.wikibooks.org/wiki/Musiklehre:_Was_ist_Musik%3F | http://sekundarschulvorbereitung.ch/contentLD/SV/SA36mMusik.pdf | https://definition-online.de/musik/ ).<br />Possible forms of writing down these problems<br />Yet this kind of semiotic net need not be exactly what the name literally suggests.<br />Two possible forms<br />It can, for one, be a predicate-logic ( https://de.wikipedia.org/wiki/Pr%C3%A4dikatenlogik | https://www.erpelstolz.at/christian/skriptum/skriptum.pdf | https://de.wikiversity.org/wiki/Pr%C3%A4dikatenlogische_Formeln#Dreistellige_Pr%C3%A4dikate ) description of the intended music. 
This would be a net of rules ( and meta-rules: rules about rules and their application ) that describes this music ( https://metaphor.ethz.ch/x/2019/hs/401-1511-00L/sc/Grundlagen.pdf | https://files.ifi.uzh.ch/rerg/amadeus/teaching/courses/formale_grundlagen_ss05/Praedikatenlogik.4.pdf | http://www.fb10.uni-bremen.de/khwagner/grundkurs2/kapitel4.aspx ).<br />On the other hand it could also be a system of the lambda calculus ( https://www.infosun.fim.uni-passau.de/cl/lehre/funcprog05/wasistfp.html | https://www.it-talents.de/blog/it-talents/was-ist-funktionale-programmierung-wann-setze-ich-sie-ein | https://de.wikipedia.org/wiki/Funktionale_Programmierung ), that is, practically a description of how the music is to be created directly.<br />The difference between these forms<br />Where the first puts the description of the result in the foreground, the other describes the direct process of generation. <br />The Prolog form<br />For a better understanding of the former, see the programming language Prolog ( https://www.ps.uni-saarland.de/courses/seminar-ws03/LogischeProgrammierung.pdf | https://www.uni-trier.de/fileadmin/fb2/LDV/Naumann/prolog.pdf | https://www.swi-prolog.org/pldoc/doc_for?object=manual )</p>\n<p>The Lisp / OpenMusic form<br />and for the latter Lisp ( https://matthias.benkard.de/lisp/introductio.de.html | https://de.quora.com/Was-ist-so-toll-an-Lisp | http://www.softwarepreservation.org/projects/LISP/book/LISP%201.5%20Programmers%20Manual.pdf ) and its graphical realization as OpenMusic ( https://en.wikipedia.org/wiki/OpenMusic | http://repmus.ircam.fr/openmusic/home | https://openmusic-project.github.io/libraries.html ).<br />Possible aspects for the art of music from both forms<br />The first kind of art then starts from the approach of fulfilling a form ( http://www.rene-finn.de/Referate/formenlehre_im_unterricht.html | https://de.wikipedia.org/wiki/Formenlehre_(Musik) | https://de.wikipedia.org/wiki/Gattung_(Musik) ). The other way consists in letting a data stream flow. <br />Especially the play of question and answer in improvisation<br />The dialogue of the computer with its own kind or with the beholder, and the eternal play of question and answer in which each answer becomes the next question ( https://www.lernhelfer.de/schuelerlexikon/musik/artikel/improvisation | http://michael-michaelis.de/index.php/improvisation | https://www.offeneohren.org/de/improvisationsmusik_zum_einsteigen.htm ).<br />Improvising as such<br />It is precisely in the play with this dialogue that the basis of all improvising lies.<br />Improvising, moreover, is the moment in which the "user", aka ( also known as ) the listener, arrives at his own particular piece of music ( http://kurtluescher.de/downloads/KL_Musikalisches_Improvisieren.pdf | https://www.wolke-verlag.de/wp-content/uploads/2018/06/Dieter-Nanz-Aspekte-der-freien-Improvisation.pdf | https://opus4.kobv.de/opus4-udk/frontdoor/deliver/index/docId/14/file/Meyer_Karl_2.pdf ). <br />The dialogue with the beholder in improvisation<br />He thereby enters into a dialogue with the computer as his counterpart in this game, and within it explores his idea of the desired music.<br />Different forms of the form of music<br />But let us now come to a particular distinction between the genres of music: we must separate music as form from absolute music. ( http://www.musikurlaub.com/lexikon/Absolute-Musik.html | https://www.gutefrage.net/frage/was-ist-die-absolute-musik-und-kann-jmd-mir-ein-beipiel-nennen | https://de.wikipedia.org/wiki/Absolute_Musik )<br />The question of absolute music<br />Since a form always starts from something external, the realization of "absolute music" can lie only in the latter. 
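The contrast between the two forms of notation just described can be sketched in a few lines of Python. Everything here is an illustrative assumption of mine, not the author's system or any real library: a Prolog-style rule net that describes properties the finished piece must satisfy, next to a Lisp-style process that generates a piece directly, with music reduced &ndash; as in the text &ndash; to a temporal sequence of tone starts and ends carrying frequencies.

```python
# Sketch (hypothetical names throughout) of the two forms discussed above:
# a rule net that DESCRIBES the desired music, and a process that
# GENERATES it directly. "Music" here is the realized form named in the
# text: a time sequence of tone starts/ends with a frequency.

from typing import List, Tuple

Note = Tuple[float, float, float]  # (start_sec, end_sec, frequency_hz)

# --- Form 1: predicate-logic style (Prolog-like) ---------------------------
# Rules state properties of the finished piece; a candidate is "the music"
# exactly when every rule holds.

def rule_monophonic(piece: List[Note]) -> bool:
    """No two notes may overlap in time."""
    ordered = sorted(piece)
    return all(a[1] <= b[0] for a, b in zip(ordered, ordered[1:]))

def rule_in_octave(piece: List[Note], base: float = 440.0) -> bool:
    """Every frequency lies within one octave above the base tone."""
    return all(base <= f < 2 * base for _, _, f in piece)

RULES = [rule_monophonic, rule_in_octave]

def satisfies(piece: List[Note]) -> bool:
    return all(rule(piece) for rule in RULES)

# --- Form 2: lambda-calculus / Lisp style ----------------------------------
# A process that builds the piece directly: here a rising whole-tone line.

def generate(n: int, base: float = 440.0, dur: float = 0.5) -> List[Note]:
    step = 2 ** (2 / 12)  # a whole tone in equal temperament
    return [(i * dur, (i + 1) * dur, base * step ** i) for i in range(n)]

piece = generate(4)
print(satisfies(piece))  # True: the generated piece also meets the rule net
```

The last line shows the relation between the forms: a generative process is one particular way of producing an object that the descriptive rule net merely circumscribes.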
But I will come back to this later.<br />A fragment of an attempt to define "absolute music"<br />Absolute music here means music that remains entirely with itself ( http://www.hifi-forum.de/viewthread-68-2281.html | https://de.wikipedia.org/wiki/Minimal_Music | http://www.minimal-music.com/sub/de/musik/idee/index.php ), music that is not shaped by or derived from external moments.<br />The problem of "absolute music" in the context of intermediality<br />This, however, raises the question of how far something can still be absolute art when the idea of the artwork is transposed from one artistic discipline into another ( http://www.denhoff.de/musikzubildern.htm | https://pixabay.com/de/photos/musik-skulptur-musikthema-schl%C3%BCssel-1587309/ | https://freiekulturkommune.de/ ). <br />Clarification ???<br />One would have to assume that it is then no longer absolute art. On the other hand, this source idea would then be something superordinate to the artistic disciplines ( https://www.gluecklich.info/kunst.htm | http://mozartcultures.com/de/heidegger-und-kunst-auf-das-wesen-der-kunst/ | http://www.lexikus.de/bibliothek/Geschichte-der-Magie-01/037-Das-Wesen-der-Kunst ). 
Which could in turn make it possible that it is still absolute art after all.<br />What can we learn from this intermediate layer<br />As with the proportions of the octave, the point is to create two contrary orders that can be derived from one another by changing the position of the tonic within the order itself ( https://www.wissen.de/lexikon/kadenz-musik | https://www.gutefrage.net/frage/was-ist-eine-kadenz-musik | https://de.wikibooks.org/wiki/Musiklehre:_Dur-Kadenzen ).<br />Transferable ?!?<br />This shift should occur neither by the size of the octave &ndash; which is ruled out for tautological reasons &ndash; nor on the basis of the fifth &ndash; which would simply be too coarse. That leaves, as still simple enough, the minor and major third ( https://de.wikipedia.org/wiki/Terz_%28Musik%29 | https://musikwissenschaften.de/lexikon/t/terz/ | https://www.gutefrage.net/frage/was-ist-eine-grosse-und-was-ist-eine-kleine-terz ).<br />The division within the octave<br />If one now divides the octave into such thirds, one obtains roughly four thirds per octave. The middle and the last third drop out &ndash; precisely because of the tautology of their meaning as fifth and octave. <br />The harmonic secondary functions<br />What remains are therefore three systems: the initial system, and the shift of the tonic a third up or down. <br />The minor-parallel functions <br />The system shifted downward we call the minor parallel ( https://de.m.wikipedia.org/wiki/Paralleltonart | http://universal_lexikon.deacademic.com/231067/Dur-Moll-System | https://de.wikipedia.org/wiki/Die_Sprache_der_Tonart ) to the first system. <br />The counter-parallel functions<br />The system shifted upward we call the counter-system ( https://de.wikipedia.org/wiki/Gegenklang | http://www.mu-sig.de/Theorie/Tonsatz/Tonsatz04.htm | http://www.mu-sig.de/Theorie/Tonsatz/Tonsatz00.htm ).<br />Asymmetry enriches harmony<br />Perhaps it should already be said here that a certain asymmetry is the foundation of all art ( http://de.nextews.com/22f813c6/ | https://www.lernhelfer.de/schuelerlexikon/kunst/artikel/ordnungsprinzipien-des-bildaufbaus | http://www.joachimschummer.net/papers/2006_Symmetrie_Krohn.pdf ). <br />The artistic play with harmony<br />Art, as I will explain later, always begins with playing with the possible, and asymmetry is the sugar that makes this play cheerful for us. Of course &ndash; and this too I will explain later &ndash; the beholder must be in a condition to give himself over to this play ( https://www.kunsthalle-karlsruhe.de/vermittlung__trashed/vermittlungskonzept/ | https://www.boesner.com/kunstportal/buchtipp/kunst-unterrichten/ | https://www.ankevonheyl.de/was-ist-kunstvermittlung/ ).<br />Kandinsky on the rules of playing with the parameters of art<br />Kandinsky too ( https://geboren.am/person/wassily-kandinsky | https://de.wikipedia.org/wiki/Wassily_Kandinsky | http://www.kandinskywassily.de/werk-1.php ) said it: it is not about the poles of the possible, nor their midpoint; indeed, it is not even about this middle. 
It is about the region between all these points, and about the question of what these in-between regions, as phenomena, do to the beholder.<br />Is the beholder ready for this game<br />Only, the beholder must be ready for it ( https://www.zeit.de/2012/17/Museumbesuch-Studie | https://www.gutefrage.net/frage/wirkung-auf-den-betrachter-kunst | https://www.scinexx.de/businessnews/wie-kunst-die-psyche-beeinflusst/ ).<br />Kandinsky's treatise "Point and Line to Plane"<br />Incidentally, Kandinsky was the first to present what one might call a theory of abstract image-making, in his treatise "Point and Line to Plane" ( http://absolut-basics.com/node/57 | https://www.bauhaus100.de/das-bauhaus/lehre/unterricht/unterricht-wassily-kandinsky/ | https://archive.org/stream/punktun00kand/punktun00kand_djvu.txt ).<br />Punk in art / subversive art<br />And art must always also be "subversive", since the task of art proper &ndash; what I have called "New Art" in my texts &ndash; always consists in making the beholder curious about regions he would otherwise never touch ( https://www.waz.de/staedte/muelheim/ruhr-gallery-zeigt-demokratische-kunst-fuer-querdenker-id227593385.html | https://www.lokalkompass.de/bochum/imagepost/kunst-im-sinne-des-betrachters_i402525 | https://www.art-in.de/ausstellung.php?id=6176 ).<br />Whoever wishes to learn more about "New Art", about "Subversive Art", is referred to the lexicon.<br />The third makes the difference between minor and major<br />Perhaps the difference between the minor and the counter scale is also the difference between major, minor and augmented thirds ( https://www.helpster.de/dreiklaenge-leicht-erklaert-so-verstehen-sie-musik_84455 | http://www.musikzeit.de/theorie/dreiklang.php | https://www.theorie-musik.de/intervalle/ ).<br />From this now arises the question:<br />Could one not subdivide more finely in between? In the old system the fifth up and down from the tonic yielded the dominant and the subdominant ( https://musikwissenschaften.de/lexikon/d/dominante/ | https://de.m.wikipedia.org/wiki/Dominante | https://de.wikipedia.org/wiki/Dominante ). <br />The necessary transfer<br />Of course one could assign to the fifth what was the octave's role, and to the third what was the fifth's. It would thus become possible to erect further systems on the interval of the second ( https://de.m.wikipedia.org/wiki/Sekunde_(Musik) | http://dictionary.sensagent.com/Sekunde%20(Musik)/de-de/ | https://klangsteine.com/blog/die-ordnungen-der-musik/ ). <br />And what one obtains from this</p>\n<p>These could then be named minor-dominant functions, and nothing would prevent us from letting this happen with ever smaller intervals.<br />Kandinsky's theory of the point versus harmonic sub-layers<br />Recall here what Kandinsky said about the nature of the point in his treatise "Point and Line to Plane" ( http://phaenomenologica.de/wp-content/uploads/2017/09/Kandinsky_Punkt_cinq.pdf | https://www.bauhaus-bookshelf.org/bauhausbuecher-9-wassily-kandinsky-punkt-und-linie-zur-flaeche-pdf-1926.html ). 
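The tonic shifts described above can be checked numerically in twelve-tone equal temperament. This is a sketch under my own assumptions (pitch classes C=0 … B=11, and the conventional reading that the minor parallel lies a minor third below the major tonic while the counter chord lies a major third above), not the author's formal system:

```python
# Tonic shifts in 12-tone equal temperament (pitch classes: C=0 ... B=11).
# Illustrative sketch; the step patterns and names are standard theory,
# the framing as "systems" follows the text.

MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half steps of the major scale
MINOR_STEPS = [2, 1, 2, 2, 1, 2, 2]  # natural minor

def scale(tonic: int, steps=MAJOR_STEPS) -> set:
    """Pitch-class set of a scale built from `tonic` by the given steps."""
    pcs, pc = {tonic}, tonic
    for s in steps[:-1]:
        pc = (pc + s) % 12
        pcs.add(pc)
    return pcs

c_major = scale(0)                            # C major
a_minor = scale((0 - 3) % 12, MINOR_STEPS)    # tonic a minor third DOWN
print(c_major == a_minor)                     # True: same material, shifted centre

# Tonic a major third UP gives the counter chord (triad on E):
gegenklang = {(4 + i) % 12 for i in (0, 3, 7)}  # E minor triad: E, G, B
print(gegenklang <= c_major)                  # True: it lies inside C major

# And roughly four thirds stack inside one octave of 12 semitones:
print(12 // 3, 12 // 4)                       # 4 3
```

The first check is the familiar relative-minor relation (C major and A minor share all pitch classes); the second shows why the upward shift still yields material native to the initial system.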
The point is a sub-element of the picture that itself, so to speak, constitutes a picture within the picture.<br />Overload in art and music<br />One can, however, extend a picture with ever more sub-pictures. Yet what one also knows from art is the overloaded picture.<br />The right measure for the content of an artwork<br />A picture should always depict what it is meant to depict ( https://zeichnen-lernen.net/gestalten/bildanalyse-254.html | https://www.helpster.de/bildanalyse-in-der-kunst-so-fuehren-sie-sie-durch_67147 | http://www.abipedia.de/bildanalyse.php ), much as in logic. <br />Content in logic versus art<br />One might even say: analogue logic is the picture, and the digital picture is logic ( https://uni-24.de/digitale-analoge-kommunikation-beispiel-aus-dem-alltag-tz24/ | https://www.gutefrage.net/frage/hilfe-analog-digital-philosophie | https://de.wikipedia.org/wiki/Digitalit%C3%A4t ). <br />The prohibition of images in religion<br />And let it be remembered that the early Jews &ndash; and with them the Christians and Islam &ndash; were given a prohibition of images by God ( https://de.wikipedia.org/wiki/Bilderverbot | https://web.archive.org/web/20070612113609/http://www.nzz.ch/2006/02/16/fe/articleDKUC5.html | http://www.bu.edu/mzank/tr-deutsch/archiv/Bilderverbot.html ). A certain fear of the image may have prevailed: the image promised so much &ndash; too much &ndash; in depicting reality that, like a voodoo doll, it promised power over what it depicted ( http://www.terra-human.de/kulturelles/hoehlenmalerei.htm | https://www.praehistorische-archaeologie.de/thema/hoehlenmalerei/jagd-und-fruchtbarkeitszauber/ | https://www.seilnacht.com/Lexikon/Hoehlen.htm ); recall here, further, ancient cave painting.<br />The question of the right content of the artwork<br />Finding an idea for an artwork<br />Before we create a picture, an artwork, we must ask ourselves: what do we want the picture to depict? If one does not paint what one wants to depict, one creates a mute language; if, on the contrary, one says too much, one begins the sin of garrulousness. It is just the same with music ( https://www.ideenfindung.de/%C3%9Cbersicht-Liste-Kreativitaetstechniken-Ideenfindung.html | https://www.uni-bielefeld.de/erziehungswissenschaft//scs/pdf/leitfaeden/studierende/themenfindung.pdf | https://www.martin-missfeldt.de/bewerbungsmappe-kunst-tipps-mappenberatung.php ).<br />The problem of missing content<br />And yet in some entertainment music the point of what is to be depicted lies in the garrulousness itself. That may be an acceptable case in entertainment music, but in art music it should give pause ( https://www.focus.de/panorama/boulevard/pop-es-hat-so-schoen-geprickelt-_aid_209238.html | https://mltysk.wordpress.com/2012/03/09/unterhaltungs-und-populare-musik-in-deutschland/ ). See what I will still say about techno and trance.<br />On the new harmonic layer<br />Second instead of third<br />Thus the second, in its sub-varieties, becomes the asymmetric subdivision of the third. 
And within this frame I assume that what has been said about the cadence ( http://dictionary.sensagent.com/Kadenz%20(Harmonielehre)/de-de/ | https://www.musiker-board.de/threads/ii-v-i-kadenz-unter-verwendung-von-substitutdominaten.401946/ | http://www.musiker-knowhow.de/709-kadenzen-bilden-und-erkennen.html ) and its secondary functions &ndash; for example the subdominant parallel ( https://de.wikipedia.org/wiki/Funktionstheorie | https://musikanalyse.net/tutorials/funktion-und-sequenz/ | https://de.wikipedia.org/wiki/Pr%C3%A4dominante ) &ndash; can be transferred in the same way to the finer subdivisions.<br />An attempt to transfer the cadence to this new layer<br />The attempt to describe here what this cadence is about I would gladly declare failed before I even begin it. ( https://gehoerbildung-musiktheorie.de/kadenz/ | https://www.gutefrage.net/frage/musiktheorie-doppeldominante-sequenzierung-kadenzen )<br />On closures and modulation<br />Yet this much can be said: on these new levels, too, it is again about closures and modulations ( https://gehoerbildung-musiktheorie.de/schluswendungen/ | https://musikanalyse.net/tutorials/kadenz-als-Formmodell/ | https://de.wikipedia.org/wiki/Halbschluss ).<br />The smaller harmonic functions<br />Only that there are now closures of ever smaller units of meaning. The grammar thus extends itself the way this text is extended by dashes and sentences in parentheses ( https://www.lernhelfer.de/schuelerlexikon/musik/artikel/motiv-und-thema | https://de.wikipedia.org/wiki/Phrasierung | https://gitarrenboard.de/showthread.php?tid=30998 ).<br />Several layers, one work<br />It seems, though, that the main functions and their modulations toward one another yield the scaffolding of a work, while secondary functions rather yield the articulation of a musical phrase ( https://www.lernhelfer.de/schuelerlexikon/musik/artikel/konzert | https://www.lernhelfer.de/schuelerlexikon/musik/artikel/concerto-grosso-zwischen-suite-und-ritornellform | https://www.theorie-musik.de/musikformen/musikformen/die-fuge/ ).<br />Modulations across different layers<br />Altogether it seems that modulation is the path from the outer skins of the onion of the musical work &ndash; or of the Russian doll &ndash; into the ever finer harmonic levels ( https://deacademic.com/dic.nsf/dewiki/968312 | https://de.qwe.wiki/wiki/Modulation_(music) | https://web.archive.org/web/20040502113601/http://www.smu.edu/totw/modulate.htm ).</p>\n<p>Tension through modulation<br />Tension of the kind that arises through such modulations &ndash; through the possibility of reinterpreting the concreta of sounds within the abstracta of their harmonic function &ndash; is the fuel of music ( https://de.wikipedia.org/wiki/Spannungston | https://web.archive.org/web/20160304080056/http://www.marcus-baader.de/pdf/spannung_oder_entspannung.pdf | https://de.wikipedia.org/wiki/Upper_Structure ).<br />Music must occur in time<br />Is there any music without time at all<br />Music that remains with one stationary sound is, in this sense, not music. Fortunately there is no stationary sound ( https://de.wikipedia.org/wiki/Klangfarbe | https://freie-referate.de/musik/klangfarben | https://www.musiklexikon.ac.at/ml/musik_K/Klangfarbenmelodie.xml ). <br />The non-stationary sound of natural instruments<br />For one thing, even the sound of a single instrument always exists in temporal progression: the sound arises, unfolds and passes away again. Much the same holds for the whole piece of music. 
This tension thus supports the nature of the whole piece.<br />On the general impossibility of timeless music<br />Indeed, wanting to grasp music in an instant is like wanting to experience walking while standing still. Since music is in a certain sense temporal structure, it can only be grasped in time; were one to grasp it in single instants, it would have to decay into disconnected moments.</p>\n<p>New harmony, part three<br />One would only have to expect these harmonic functions to grow ever more complex, indeed to become estranged from the listener and to lead to ever more perverted harmonies ( https://cinemusic.de/2003-2494-im-dritten-reich-verboten-entartete-musik-folge-3/ | https://www.dw.com/de/verbotene-kl%C3%A4nge-im-ns-staat/a-16834460 | https://www.duemling.de/entartete-musik/ ).<br />The necessity of mediating works of this new harmony<br />A chief problem of modern art is conveying it to the man in the street. We must not assume that this gentleman still has the time in everyday life ( https://www.musiker-board.de/threads/moderne-klassik-zum-einstieg.384689/ | https://www.zeit.de/2010/01/Interview-Rosa | https://www.kubi-online.de/artikel/plaedoyer-musse-gedanken-einem-kontemplativen-musikunterricht ) to occupy himself with complex theories while contemplating the artwork ( https://uol.de/musik/lehre/angewandte-musiktheorie-und-komposition | http://www.miz.org/static_de/themenportale/einfuehrungstexte_pdf/05_NeueMusik/fricke_aesthetiken.pdf | http://www.miz.org/static_de/themenportale/einfuehrungstexte_pdf/05_NeueMusik/fricke_strukturen.pdf ). 
<br />Is a division of art into entertainment and seriously meant art sensible<br />For this reason a division of music into entertainment music and so-called serious music can be appropriate.<br />Must entertainment music be shallow<br />Entertainment music need not at all constitute itself in shallow approaches, even if I have already spoken above about the garrulousness of modern entertainment music. <br />Simpler harmony in entertainment music<br />A simple harmonic system &ndash; for example the pentatonic scale ( https://de.wikipedia.org/wiki/Pentatonik | https://www.bonedo.de/artikel/einzelansicht/skalen-workout-pentatonic-scale-1.html | https://www.stringworks.ch/grundlagen/theorie/die-pentatonik/ ) &ndash; can give a work of music greater success as entertainment ( https://www.guitar.de/lesen/news/pentatonik-solo-lernen-noten/ | https://supportnet.de/fresh/2006/6/id1357123.asp | https://de.wikibooks.org/wiki/Gitarre:_Die_Dur-Pentatonik ). <br />Techno / trance as "absolute music" with entertainment value <br />Let it also be noted here that techno, and especially trance ( https://de.wikipedia.org/wiki/Techno | https://commons.wikimedia.org/wiki/Category:Techno?uselang=de | https://de.wikipedia.org/wiki/Trance_(Musik) ), is a music practically without a message, or in a special sense carries the message of pure entertainment.<br />The phenomenon of the Loveparade <br />This special message is then presumably also the message of the Loveparade ( https://de.wikipedia.org/wiki/Loveparade | https://www.faz.net/aktuell/feuilleton/interview-juergen-laarmann-die-wut-auf-berlin-ist-verstaendlich-11276978.html | https://commons.wikimedia.org/wiki/Category:Loveparade?uselang=de ), a demonstration &ndash; legally registered as exactly that &ndash; of good cheer.</p>\n<p>Seriousness destroys the artwork<br />Serious music, that is, so-called art music, is really rather a contradiction in itself: art that is meant to be serious must run aground.<br />Art is rather a game<br />Art is much more the play with the possible ( http://kunst-und-spiele.de/ | https://tageswoche.ch/kultur/kunst-als-spiel-oder-spiel-als-kunst/ | https://www.zeit.de/1947/23/die-kunst-des-moeglichen ). In art it matters &ndash; in my opinion &ndash; above all to play with the limits of the possible.<br />On the circumstances of the production of art<br />So far this has led, in electroacoustic art too, to a division of labour practically between composer ( https://icem.folkwang-uni.de/~ludi/aussermusikD.html | https://www.tamino-klassikforum.at/index.php?thread/13570-schl%C3%BCsselwerke-elektroakustischer-musik/ | https://www.researchgate.net/publication/319963314_Wandlungen_der_elektroakustischen_Musik ) and interpreter ( https://www2.ak.tu-berlin.de/~fhein/Alias/Geschichte/themen/Machlitt-AdK.html | https://www.adk.de/de/akademie/e-studio/index.htm | https://www1.wdr.de/radio/wdr3/programm/sendungen/wdr3-open-sounds/open-sounds-140.html ) &ndash; with slight modifications &ndash; as the artist and the technician.<br />The task of composer / artist and sound engineer / technician<br />The one attends to the form of the music, the other to its realization. <br />New tasks in the "New Art"<br />In the "New Music" a shift of these tasks takes place. 
For one thing, the artist does the preparatory work on the artistic possibilities ( http://www.kirstenreese.de/texte/ReeseGeschlechtsloseElektronischeMusik.pdf | https://de.wikipedia.org/wiki/Live-Elektronik | https://en.wikipedia.org/wiki/Live_coding ) and their enablement in the technology. <br />Extension of the tasks<br />He further comes into play as the one who opens for the beholder the way to his own unique piece of multiple art. Indeed, he should design the program / the system for obtaining this artwork ( https://www.indiepedia.de/index.php?title=Live-Coding | https://www.indiepedia.de/index.php?title=Pure_data | https://www.indiepedia.de/index.php?title=SuperCollider ) in such a way that an untrained, or more or less prepared, beholder can begin the game ( https://toplap.org/ | https://developer.ibm.com/callforcode/blogs/use-node-red-and-ai-to-analyze-social-media-after-a-disaster/ | https://cdm.link/2019/02/live-coding-group-toplap-celebrates-days-of-live-streaming-events/ ). <br />A new title for this calling<br />This new task should receive the title of philosophical-technical assistant, which unfortunately is not yet an official training occupation.<br />The internet radio of CreCo<br />Not without reason did I describe my little internet radio with the statement ( https://laut.fm/electronic_art_music | https://laut.fm/ | https://de-de.facebook.com/laut.fm ) that art is play, and that this radio is to be regarded as my play with the possibilities of the Internet 3.x.<br />Reasons for this artistic intervention<br />With this radio I was after an alternative to the internet radio of the DEGEM ( https://www.degem.de/ | https://www.degem.de/info/ | https://de.wikipedia.org/wiki/Deutsche_Gesellschaft_f%C3%BCr_Elektroakustische_Musik ). That radio was unfortunately never kept up to date or varied, which is why I had considered this artistic intervention worthwhile.<br />Reference to the "Subversive Lexicon" of CreCo<br />I cannot go further here into the playability of art; the inclined reader may follow other texts of my so-called "Subversive Lexicon". This lexicon too is a play with the presentational possibilities of the internet, and in that sense art ( https://de.wikipedia.org/wiki/Netzkunst | https://web.archive.org/web/20041204184445/http://www.heise.de/tp/r4/magazin/nk/ | http://www.textportrait.de/ ).<br />Microtonal music once more ( conclusion )<br />Which microtonal music I mean, and reject<br />So when I hear it asked whether microtonal music has its justification, I hereby state my starting point for grounding microtonal scales ( https://www.musiker-board.de/threads/mikrotonale-musik-diverses.358403/ | https://www.sequencer.de/synthesizer/threads/mikrotonale-musik-andere-tonskalen.44057/ | https://www.gutefrage.net/frage/charakteristiken-von-skalen--diatoniken--pentatoniken ).<br />Microtonality is not music with detuned instruments<br />Grounding microtonal scales on the Pythagorean comma ( https://kilchb.de/2018.php | https://de.wikipedia.org/wiki/Pythagoreisches_Komma | http://www.math.uni-bremen.de/didaktik/ma/ralbers/Materialien/Vortragsmat/PythagKomma.pdf ), by contrast, I reject. 
If this comma really were the starting point, then every simple detuning of an instrument would likewise have to lead to ever more complex harmonies.<br />Microtonality arises from finer harmony<br />That microtonal music does lead to ever more complex harmonies, not by virtue of such inexactness but through ever finer, semiotically grounded subdivisions ( http://www.semiotik.eu/Semiotik-und-Grundlagen-der-Wissenschaft.o326.html | http://www.sfs.uni-tuebingen.de/~gjaeger/lehre/ws0607/grundkurs/folien1.pdf | https://www.uni-frankfurt.de/59466337/Seminar-Skript-Einfuehrung-in-die-Sprachwissenschaft-I.pdf ), is something I do not wish to deny here. What this semiotics amounts to is precisely what I want to explain in this text.<br />The mathematical model of music as artwork<br />Detunings even in the best numerical model<br />Every instrument, however, even electronic music computed on a computer and thus approximating a pure mathematical model of an absolute music ( https://www.scinexx.de/news/technik/geometrie-macht-musik-zum-ohrwurm/ | https://de.wikipedia.org/wiki/Mathematisches_Modell | https://www.heise.de/newsticker/meldung/Wissenschafts-KI-zum-Download-877627.html ), can only ever be an approximation. Here too inexactness always creeps in, e.g. rounding errors.<br />Music from the computer as the purest form of &ldquo;absolute music&rdquo;<br />Nonetheless I would add that music, as the abstract image of a mathematical collection of formulas, probably represents &ldquo;absolute music&rdquo; in its purest form ( https://data-science-blog.com/blog/2016/12/15/wahrscheinlichkeitsverteilungen-zentraler-grenzwertsatz-verstehen-mit-pyhton/ | https://de.m.wikipedia.org/wiki/Prospect_Theory ). 
<br />The relation of &ldquo;computer music&rdquo; to the concept of &ldquo;absolute music&rdquo;<br />Music computed by the computer thus always stands in two relations to this music. <br />The computer's inabilities in art / music<br />For one thing, hardly any computer, at least up to the moment I write this, can arrive at an understanding of the environment surrounding it. Anything like an understanding of the world must still be foreign to the computer ( https://de.wikipedia.org/wiki/K%C3%BCnstliche_Intelligenz | https://www.springer.com/journal/13218 | https://periodensystem-ki.de/ ). <br />Shortcomings become advantages for &ldquo;absolute music&rdquo;<br />Precisely this shortcoming becomes a plus for &ldquo;absolute music&rdquo;: where something cannot be, one need not even consider the case that it were so. This remains true, at any rate, until the first computer begins to think and, for reasons of efficiency, switches humanity off ( https://www.moviepilot.de/movies/the-terminator | https://www.weltderwunder.de/photo_stories/kuenstliche-intelligenz-wie-nah-an-der-wirklichkeit-sind-die-terminator-filme | https://de.m.wikipedia.org/wiki/Terminator_(Film) ). <br />A small anecdote on Terminator<br />As a further anecdote, let me point to my thread in the philosophy forum where we discussed this final consequence ( https://www.philosophie-raum.de/index.php/Thread/28673-Verpasst-der-Mensch-seine-%C3%9Cberwindung-durch-den-Computer/ ). <br />Closing remarks on computed music<br />The example of the proto-techno group &ldquo;Kraftwerk&rdquo;<br />A question that arises at this point is an anecdote of the proto-techno group Kraftwerk ( https://de.wikipedia.org/wiki/Kraftwerk_(Band) | http://www.kraftwerk.com/de/ | https://soundcloud.com/kraftwerk-1970-1973 ). 
<br />Kraftwerk's statement on computed music<br />After all, the group asked whether future music will be created from formulas / with a pocket calculator in hand. Was this merely the anticipation of the later development of techno ( https://www.welt.de/kultur/article5059009/Kraftwerk-sind-auf-dem-Weg-zum-Weltkulturerbe.html | https://www.alumniportal-deutschland.org/deutschland/kultur/kraftwerk-band-kraftwerk-elektronische-musik-musik-techno/ | https://www.redbull.com/de-de/die-geschichte-des-techno ), or was it meant as the complete anticipation of classical avant-garde music &agrave; la Open Sounds @ WDR3 ( https://www1.wdr.de/radio/wdr3/programm/sendungen/wdr3-open-sounds/open-sounds-100.html | https://www.opensounds.eu/ )?<br />Closing remarks on microtonality<br />The problem of a false microtonality<br />The general paradox<br />If it were so, these minimal deviations would lead to ever more complex proportions the more exactly an instrument is tuned. And that would be a paradox which would make all music-making impossible.<br />Engaging with this paradox<br />This paradox kept puzzling me in my youth, until I found the answer in the fact that music is concerned not so much with concrete as with more abstract relations between what one calls musical sound. <br />And noise becomes music<br />That is, with what one calls the noise that, in the end, makes the music. 
( https://www.kita-fachtexte.de/de/fachtexte-finden/geraeusche-suchen-und-musik-erfinden/ | http://geraeuschmusik.com/ | https://www.rwg-neuwied.de/hp/so-sind-wir-organisiert/fachbereiche/kuenstlerische-faecher/musik )</p>\n<p>Harmonic meaning instead of smallest intervals<br />Yet precisely this abstractness of the formula that leads to music is also the starting point for transferring the artistic moment from one art form to another ( https://www.intermediale-kunsttherapie.net/ | https://de.wikipedia.org/wiki/Intermedialit%C3%A4t | https://de.wikipedia.org/wiki/Medienkunst ).<br />The impossibility of an intermedial transfer of art on the basis of the concrete<br />It is practically impossible to turn sounds into the building substance of sculptures. It is possible, however, to transfer the artistic principle of a piece of music onto the design principle of a sculpture ( https://de.wikipedia.org/wiki/Experiment | http://www.beckmesser.de/themen/experiment.html | https://www.kunstlinks.de/material/peez/2010-03-michl.pdf ).<br />The principle realized in the artwork must be found<br />It will often enough happen, though, that the principles of an artwork can only be found within it. 
We must then ask ourselves: what actually is the principle / the essence in such a work that so impresses us as viewers / leads us to our own work?<br />The difference between harmonic function in Europe and semiotic meaning in Asia<br />Now, looking at ethnological studies, one has seen that this system of the abstract holds above all for Eurocentric music.<br />The difference of Asia<br />Thus it is assumed that Asian music places far more weight on these concreta ( http://www.harekrsna.de/musik.htm | https://de.wikipedia.org/wiki/Chinesische_Musik | http://www.istov.de/htmls/china/china_einleitung.html ). In my opinion this can only be understood if one also relinquishes our basis of the harmonic principle. An alternative system to this harmonic principle could be found, for example, in a semiotics of noise ( http://agis-www.informatik.uni-hamburg.de/WissPro/auditives/archive/M-Musiksemiotik/kapitel3/sprache-musik-2.html | http://www.stauffenburg.de/asp/books.asp?id=1241 | http://www.thema-journal.eu/index.php/thema/article/download/48/97 ).</p>\n<p>&nbsp;</p>\n<p>Semiotics instead of function<br />It is, in effect, about a modification of the paradigm by which art is defined. <br />Semiotics instead of function in computer music<br />In the abstract, one will find lambda programs and predicate-logic formulations ( https://praxistipps.chip.de/komponieren-mit-dieser-software-klappts_49447 | https://www.computerbild.de/fotos/Ganz-einfach-Musik-mit-Gratis-Software-selbst-komponieren-4058943.html#6 | https://www.netzwelt.de/download/musik/musik-produktion/index.html ). 
In these Asian, concrete works, by contrast, it is more a matter of actual semiotic networks ( https://www.klang-forscher.de/vergangenes-2015/klang-forscher-in-muenchen.html | https://www.annikas-musikecke.de/musikecke/hoeren/soundscape/ | https://de.wikipedia.org/wiki/Soundscape | https://norient-beta.com/podcasts/soundscape2010/ ). <br />The semiotics of noise<br />Semiotics from the viewer's everyday life<br />This semiotics of noise means that what matters is the function in which a noise, or a similar one, confronts us in everyday life ( https://norient-beta.com/podcasts/krachundstille/ | http://www.sfu.ca/~truax/OS5.html ). But this leads to questions lying beyond the scope of an essay on why the chromatic scale takes this form. ( http://www.beckmesser.de/komponisten/cage/praepklavier.html | https://www.nzz.ch/feuilleton/das-praeparierte-klavier-von-john-cage-bis-aphex-twin-ld.1474092 | https://www.kakadu.de/musiktag-das-praeparierte-klavier.2728.de.html?dram:article_id=384086 )<br />Differences between thinking in Asia and in Europe<br />One more remark, perhaps: the Asian seems to me to think altogether far more in concreta than the European ( https://www.dasgehirn.info/aktuell/frage-an-das-gehirn/denken-asiaten-anders-als-beispielsweise-europaeer | https://www.stern.de/gesundheit/p-m--fragen---antworten--denken-asiaten-und-amerikaner-anders-als-europaeer--7358716.html | https://www.derstandard.at/story/1705880/europaeer-ticken-anders-als-asiaten ). <br />The different historical development of music and art<br />Art<br />And it seems that art and music, as their history viewed from today shows, developed towards each other from two opposite directions. 
Art was once abstract ( abstract cave painting ), became concrete ( Christian art ), and now strives for a new abstraction ( Piet Mondrian, Kandinsky ). ( https://www.daskreativeuniversum.de/kunstgeschichte/ | https://www.lernort-mint.de/allgemeinwissen/kultur/kunstepochen/ | https://kunstgeschichte.info/ ) <br />Music</p>\n<p>Music started from the imitation of natural sounds, developed its full abstraction in the Classical period, and seeks the way back to concrete sound in electroacoustic art ( https://www.peter-locher.de/images/pdf-dateien/musikgeschichte-ueberblick.pdf | https://www.lernort-mint.de/allgemeinwissen/kultur/epochen-der-musik/ | https://www.ejwue.de/fileadmin/posaunen/upload/2009-03-Fachthema-MG-Mittelalter-0803.pdf ).<br />Not a repetition of phases but a development<br />Mind you, the precedence of the concrete before the abstract, and of the abstract before the concrete, makes it possible for us to understand the abstract, or the concrete, better against this background ( https://de.wikipedia.org/wiki/Postmoderne | http://www.erlangerliste.de/ressourc/postmod.html | https://www.marxists.org/reference/subject/philosophy/works/fr/lyotard.htm ).<br />Conclusion<br />What there is, however<br />What there is, however, is the hearing-into-tune of a music in the human listener ( https://www.dasgehirn.info/wahrnehmen/hoeren/hoeren-mehr-als-nur-schall-und-schwingung | https://www.castano-flamenco.com/fileadmin/user_upload/DAS_HO__REN.pdf | https://www.deutschlandfunk.de/wie-uns-das-ohr-uebers-ohr-haut.740.de.html?dram:article_id=111683 ). There is also the fact that, in physics, resonance does not simply set in at absolutely simple proportions, but increases as those proportions approach simplicity and fades again beyond it ( https://physik.cosmos-indirekt.de/Physik-Schule/Mathematisches_Pendel | https://www.leifiphysik.de/mechanik/mechanische-schwingungen | https://www.umit.at/data.cfm?vpath=pdf-dokumente/mpbp-examples ). 
<br />Not only does the human being perceive, physics reacts as well<br />Here too, in effect, we find our relation between human hearing in particular and the general principle of resonance in physics.<br />Therefore<br />One can thus say that the ear hears a sound into tune in pitch all the more readily the coarser the layer is from which this sound receives its harmonic task ( https://www.gutefrage.net/frage/warum-finden-wir-schiefe-toene-schrecklich | https://www.spin.de/forum/643/-/6333 | https://www.wer-weiss-was.de/t/warum-gibt-es-schiefe-toene-fuers-gehirn/4500920/9 ).</p>\n<p>Harmonic meaning, then<br />Thus the actual subdominant in a piece of music, which perhaps does not even appear as a sound itself but, for example, as the root of a broken triad, may itself be quite out of tune. A third above this root in that broken triad, detuned somewhat less, may by contrast appear very disharmonious. Mind you, detuned relative to this root, not to a tone lying outside the chord ( https://de.m.wikipedia.org/wiki/Harmonie | https://viktorjugovic.files.wordpress.com/2014/10/harmonielehre-neu-fc3bcr-gitarristinnen.pdf | https://de.wikipedia.org/wiki/Tonsystem ).<br />The subversive side of the &ldquo;New Art&rdquo;<br />Thinking as a manipulated object<br />Looking at this properly, one sees that in actuality it is always a matter of an interpretation of reality that has already taken place. <br />The perception loop<br />The human being will again and again receive his environment in his interpretation of reality through an actuality that has already taken place ( https://de.mimi.hu/psychologie/erleben.html | https://www.sprachschule-aktiv-muenchen.de/unterschied-zwischen-erfahren-und-erleben/ | https://de.wikipedia.org/wiki/Erleben ). 
<br />Secret society<br />The measure taken by the system, the political one, is not to allow us to leave this cage in the first place ( https://www.psychotipps.com/unterbewusstsein.html | https://www.angst-panik-hilfe.de/gesundes-denken.html | https://www.lebeblog.de/macht-der-gedanken/ ).<br />The way back to liberated perception<br />This is the starting point from which the &ldquo;New Art&rdquo;, as &ldquo;Subversive Art&rdquo;, can open up for us the way out of this cage ( https://de.wikipedia.org/wiki/Subversion | https://www.deutschlandfunk.de/dada-und-die-folgen-subversive-kunst.1184.de.html?dram:article_id=346777 | https://www.ssoar.info/ssoar/bitstream/handle/document/32577/ssoar-psychges-2008-4-ernst-Subversion_-_eine_kleine_Diskursanalyse.pdf?sequence=1 ).<br />P.S.:<br />Transfer to rhythm<br />What I have said here about the detuning of pitch can also be transferred to the system of a tone's duration. In short, the coarser the rhythmic grid of a bar, the less small metric inexactness carries weight ( http://www.essl.at/bibliogr/stockhausen.html | https://www.hmdk-stuttgart.de/fileadmin/downloads/Werkverzeichnis_Professoren/Analyse_Musik_des_20._Jh._1__17.09.2013_.pdf | https://de.qwe.wiki/wiki/Serialism ).<br />Transfer to serial music<br />And, starting from this approach, the same holds for other factors / parameters of a piece of music.<br />( https://www.indiepedia.de/index.php?title=Postserielle_Musik | https://www.capriccio-kulturforum.de/index.php?thread/3293-postserielle-musik-ein-kanon/&amp;s=51db6239f54b71392df1efe14f41ebc7eddf7acd | https://de.wikipedia.org/wiki/Informelle_Kunst )</p>",
        "topics": [
            {
                "id": 360,
                "name": "Major-scale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 358,
                "name": "Microtonal-music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 361,
                "name": "Minor",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 359,
                "name": "Music-scale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17628,
            "forum_user": {
                "id": 17624,
                "user": 17628,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6389f37aeaee190f92e385b6a9b395f6?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "creco",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "preference-for-the-chromatic-sound-scale",
        "pk": 589,
        "published": false,
        "publish_date": "2020-03-21T22:29:39.480279+01:00"
    },
    {
        "title": "Explore Designer Women Kashmiri Stoles for Modern & Ethnic Looks",
        "description": "Women Kashmiri Stoles are known for their rich heritage, elegant designs, and premium craftsmanship. Made using fine wool and traditional techniques, these stoles offer unmatched warmth and timeless style.",
        "content": "<h2>Introduction to Women Kashmiri Stoles</h2>\n<p><a href=\"https://elaboreluxury.com/collections/women-kashmiri-stoles\">Women Kashmiri Stoles</a> are among the most elegant and versatile fashion accessories, known for their luxurious feel and traditional craftsmanship. Originating from the Kashmir region, these stoles are handcrafted by skilled artisans using techniques passed down through generations.</p>\n<p>They are not just winter essentials but timeless fashion pieces that reflect heritage and sophistication.</p>\n<hr>\n<h2>What Makes Women Kashmiri Stoles Unique?</h2>\n<p>The uniqueness of Women Kashmiri Stoles lies in their high-quality materials and intricate designs. Made from fine wool, pashmina, and cashmere, these stoles provide both comfort and elegance.</p>\n<p>Key features include:</p>\n<ul>\n<li>Soft and lightweight texture</li>\n<li>Excellent warmth for cold weather</li>\n<li>Traditional and modern design patterns</li>\n<li>Durable and long-lasting quality</li>\n</ul>\n<hr>\n<h2>Craftsmanship Behind Women Kashmiri Stoles</h2>\n<p>The making of Women Kashmiri Stoles involves a detailed and artistic process:</p>\n<ul>\n<li><strong>Material Selection:</strong> Premium wool and pashmina fibres</li>\n<li><strong>Hand Spinning:</strong> Traditional techniques for fine threads</li>\n<li><strong>Hand Weaving:</strong> Crafted on wooden looms</li>\n<li><strong>Design &amp; Embroidery:</strong> Includes Sozni, Aari, and Kani work</li>\n</ul>\n<p>Each stole is carefully made, often taking weeks to complete, ensuring uniqueness and quality.</p>\n<hr>\n<h2>Types of Women Kashmiri Stoles</h2>\n<p>There are various styles of Women Kashmiri Stoles available:</p>\n<ul>\n<li>Pashmina Kashmiri Stoles</li>\n<li>Embroidered Kashmiri Stoles</li>\n<li>Kani Stoles</li>\n<li>Woollen Kashmiri Stoles</li>\n<li>Zari Work Stoles</li>\n<li>Modern Designer Stoles</li>\n</ul>\n<p>These options cater to both traditional and contemporary fashion 
preferences.</p>\n<hr>\n<h2>Why Choose Women Kashmiri Stoles?</h2>\n<p>Women Kashmiri Stoles are a perfect investment for style and comfort:</p>\n<ul>\n<li>Ideal for weddings and festive occasions</li>\n<li>Suitable for both ethnic and western outfits</li>\n<li>Provides warmth without heaviness</li>\n<li>Represents luxury and cultural heritage</li>\n</ul>\n<p>These stoles are known for their timeless appeal and premium quality.</p>\n<hr>\n<h2>Styling Tips for Women Kashmiri Stoles</h2>\n<p>You can style Women Kashmiri Stoles in multiple ways:</p>\n<ul>\n<li>Pair with sarees or suits for a traditional look</li>\n<li>Style with jeans and tops for a fusion outfit</li>\n<li>Use as a winter wrap for daily wear</li>\n<li>Add elegance to formal and party outfits</li>\n</ul>\n<hr>\n<h2>Care Tips for Women Kashmiri Stoles</h2>\n<p>To maintain the quality of your Women Kashmiri Stoles:</p>\n<ul>\n<li>Prefer dry cleaning</li>\n<li>Store in breathable fabric bags</li>\n<li>Avoid direct sunlight</li>\n<li>Keep away from moisture and perfumes</li>\n</ul>\n<hr>\n<h2>Conclusion</h2>\n<p>Women Kashmiri Stoles are the perfect combination of tradition, luxury, and modern fashion. Their softness, intricate craftsmanship, and timeless designs make them a must-have accessory for every wardrobe.</p>\n<p>If you want to upgrade your style with elegance and authenticity, Women Kashmiri Stoles are the ideal choice.</p>",
        "topics": [
            {
                "id": 4542,
                "name": "Women Kashmiri Stoles",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166341,
            "forum_user": {
                "id": 166105,
                "user": 166341,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a053613fe6f95130b8e798ec65e5832b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-01T13:44:58.436606+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "elaboreluxury",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "explore-designer-women-kashmiri-stoles-for-modern-ethnic-looks",
        "pk": 4578,
        "published": false,
        "publish_date": "2026-04-02T07:09:05.571675+02:00"
    },
    {
        "title": "Tweak of the Week - Christmas Special 2020",
        "description": "A melodic shape is generated by combining two mathematical functions as carrier/modulator. This shape is mirrored and offset to create four parts. \r\n\r\nAdd your own tweak by changing the parameters and clicking \"save\".",
        "content": "<div style=\"position: relative; padding-bottom: 65%; height: 0;\"><iframe width=\"300\" height=\"150\" style=\"position: absolute; top: 0; left: 0; width: 100%; height: 100%; border: none;\" src=\"https://tweakable.org/embed/examples/matholodical_v1?view=panel\" frameborder=\"0\"></iframe></div>\r\n<div style=\"position: relative; padding-bottom: 65%; height: 0;\"><strong>Create your own Tweakable on&nbsp;<a href=\"https://tweakable.org/\">tweakable.org</a>.&nbsp;</strong></div>",
        "topics": [
            {
                "id": 428,
                "name": "Algorithmic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 129,
                "name": "Real time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 426,
                "name": "Tweakable",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 427,
                "name": "Tweakoftheweek",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18424,
            "forum_user": {
                "id": 18417,
                "user": 18424,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d36f7c122c36bf714b376ed2c132c929?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jwvsys",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "tweak-of-the-week-christmas-special-2020",
        "pk": 822,
        "published": true,
        "publish_date": "2020-12-17T13:02:01+01:00"
    },
    {
        "title": "Monsters and Vectors: Poetry and Particle-Life",
        "description": "A presentation of two pieces premiered this year which combine RAVE and Somax2 with poetry and A-life simulations.",
        "content": "<p>A short talk on two pieces I have premiered this year: Poem for Ghidorina for alto flute and live electronics, and Velocity Bounce, an acousmatic work with video that integrates, via OSC, T&ouml;lvera, a Python package based on Taichi(Lang) that models A-life behaviors such as flocking, microbial growth, and pulsation. Both works use RAVE and Somax2; I describe how they are applied, the choice of sounds, and the challenges they pose in the live environment. I will show a video trailer of the live performance, walk through the Max patches, and introduce the program T&ouml;lvera in more detail.</p>",
        "topics": [
            {
                "id": 2259,
                "name": "Acousmatic Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1825,
                "name": "a-life",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3194,
                "name": "Max 9",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3322,
                "name": "Poetry",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 344,
                "name": "Real-time audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2546,
                "name": "visual arts",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 14895,
            "forum_user": {
                "id": 14892,
                "user": 14895,
                "first_name": "Helen",
                "last_name": "Bledsoe",
                "avatar": "https://forum.ircam.fr/media/avatars/Bledsoe_1.png",
                "avatar_url": "/media/cache/d0/03/d003c24fc9f49a926461b290796e9c30.jpg",
                "biography": null,
                "date_modified": "2025-10-26T20:29:06.188063+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "cloudchamber",
            "first_name": "Helen",
            "last_name": "Bledsoe",
            "bookmarks": []
        },
        "slug": "monsters-and-vectors-poetry-and-particle-life",
        "pk": 3646,
        "published": false,
        "publish_date": "2025-08-29T14:03:44.392742+02:00"
    },
    {
        "title": "Towards an experimental performance of Harald Bode’s \"Phase 6\" by Juan Parra Cancino",
        "description": "This research centres on Harald Bode's musical output, specifically focusing on the reconstruction and preparation for a performance of the first section of his Phase 6. Through this experimental interpretation, the aim is to learn from Bode’s compositions and recordings, thereby deepening and broadening our understanding of his creative process beyond the musical tools he designed.",
        "content": "<h5 id=\"➡️-this-presentation-is-part-of-ircam-forum-workshops-paris-engh\"><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></h5>\r\n<p class=\"cvGsUA direction-ltr align-justify para-style-body\"><span class=\"a_GcMg font-feature-liga-off font-feature-clig-off font-feature-calt-off text-decoration-none text-strikethrough-none\">Harald Bode&rsquo;s impact on the history of electronic music, particularly through his innovative design of musical instruments, is undeniable. A close examination of his diaries and interviews reveals his continuous pursuit of being a &lsquo;well-rounded&rsquo; individual, which involved nurturing his creativity across various domains. This research centres on Bode's musical output, specifically focusing on the reconstruction and preparation for a performance of the first section of his Phase 6. The text explores the material analysis and selection process for creating a performance score while also detailing the instrumental setup designed to reproduce and extend this work faithfully in a live performance context. Through this experimental interpretation, the aim is to learn from Bode&rsquo;s compositions and recordings, thereby deepening and broadening our understanding of his creative process beyond the musical tools he designed.</span></p>\r\n<p class=\"cvGsUA direction-ltr align-justify para-style-body\"><span class=\"a_GcMg font-feature-liga-off font-feature-clig-off font-feature-calt-off text-decoration-none text-strikethrough-none\"><img src=\"/media/uploads/call-parisenghien-juan-parracancino-projectpicture1.jpg\" alt=\"\" width=\"902\" height=\"601\" /></span></p>\r\n<p class=\"cvGsUA direction-ltr align-justify para-style-body\"></p>\r\n<p class=\"cvGsUA direction-ltr align-justify para-style-body\"></p>",
        "topics": [
            {
                "id": 3934,
                "name": "Early electronics",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3935,
                "name": "Harald Bode",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3936,
                "name": "Performance of Early Electronic Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27798,
            "forum_user": {
                "id": 27770,
                "user": 27798,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/JUANOPERA.png",
                "avatar_url": "/media/cache/c7/62/c7624839fd9abf2ecdda050db1e1a048.jpg",
                "biography": "Juan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at The Royal Conservatoire The Hague (NL), where he obtained his Master's degree with a focus on composition and performance of electronic music. In 2014, Juan obtained his PhD degree from Leiden University with his thesis “Multiple Paths: Towards a Performance Practice in Computer Music”. His compositions have been performed in Europe, Japan, and North and South America. Founder of The Electronic Hammer, a computer and percussion trio, and Wiregriot (voice & electronics), he collaborates regularly with Ensemble KLANG (NL) and Hermes (BE), among many others. His work in the field of live electronic music has made him a recipient of numerous grants from bodies such as NFPK, the Prins Bernhard Cultuurfonds and the International Music Council. Since 2009, Juan has been a joint researcher at the Orpheus Institute Research Centre in Music, working on the topics of creativity and performance applied to electronic music.\n\nJuan has recently been appointed Regional Director for Europe of the International Computer Music Association for the period 2022-2026.",
                "date_modified": "2025-12-25T18:30:21.585575+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 670,
                        "forum_user": 27770,
                        "date_start": "2025-12-19",
                        "date_end": "2026-12-19",
                        "type": 0,
                        "keys": [
                            {
                                "id": 205,
                                "membership": 670
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "jotaparra",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 27798,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2754,
                    "user": 27798,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "towards-an-experimental-performance-of-harald-bodes-phase-6-by-juan-parra-cancino",
        "pk": 4112,
        "published": true,
        "publish_date": "2025-12-25T18:29:06+01:00"
    },
    {
        "title": "G3P Human Instruments - Sonnie Carlebach, Ushara Dilrukshan and Thomas Bugg.",
        "description": "Human Instruments is a trio performance combining cassette manipulation, live coding, and generative audiovisual experiments. Two electronic musicians create a sonic journey ranging from ambient pads to devastating noise, accompanied by glitching visuals, exploring the dialogue between the analogue and digital domains.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p>Presented by: Sonnie Carlebach, Ushara Dilrukshan and Thomas Bugg.<br /><a href=\"https://forum.ircam.fr/profile/sonniecarlebach/\">Sonnie Carlebach's biography</a><br /><br /></p>\r\n<p>Human Instruments is a trio performance featuring cassette manipulation, live coding, and generative audiovisual experiments.</p>\r\n<p>Two electronic musicians compose an ethereal, electrified noise of tape and code, sending the sound of a four-track cassette machine through effects and manipulations into a computer live-coding in SuperCollider in real time. With new sounds generated at the source by SuperCollider, this produces a sonic journey that passes through ambient pads, textured and disharmonised by the four-track's unique distortion, and catalyses into devastating noise as the cassette's saturation is brought back to life, then slain by expressive coded noise of another nature.</p>\r\n<p>All of these sounds are sent live through TouchDesigner to produce intense, glitching visuals that complement the tone of the performance.</p>\r\n<p>Travel with us from the analogue to the digital and into cybernetic noise, as the machines communicate with the musicians, who try to keep control while pushing the design functions of physical and binary-coded instruments to their limit. Watch the players themselves become conduits for the machines to express their deepest thoughts, as technologies decades apart attempt to communicate with one another through the translation box of a human instrument.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a></strong></p>",
        "topics": [
            {
                "id": 1248,
                "name": "Ambience sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 908,
                "name": "ambient complexity",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1863,
                "name": "ethereal",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1146,
                "name": "experimental music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1809,
                "name": "live coding",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1094,
                "name": "London",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1804,
                "name": "loop",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1862,
                "name": "noise",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1832,
                "name": "performing arts",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 274,
                "name": "Soundart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 909,
                "name": "soundcollage",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 108,
                "name": "Sound deconstruction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 815,
                "name": "soundscape",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1814,
                "name": "supercollider",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1803,
                "name": "synthese granulaire ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1861,
                "name": "Tascam",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 273,
                "name": "Touchdesigner",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 55138,
            "forum_user": {
                "id": 55075,
                "user": 55138,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/85f07c071b9ec28d30941afe5e804833?s=120&d=retro",
                "biography": "I am a mixed-media artist working in sound, moving image, and collage. I grew up in the Jewish community in London and then lived in Edinburgh for several years. I began as a musician learning bass and then moved on to producing a range of music. From there I delved into electronic music and sound art, using analogue tape machines and synths to create more experimental and expressive music. In that time I've produced several EPs and albums ranging from ambient, to folk, to techno, as well as working on soundtracks and sound design for film and video games. \nIn September of 2023 I began studying at the Royal College of Art, doing a master's in Information Experience Design. In this time I have begun work on several projects and extended media, including a short film using still images from slide projectors, and a large-scale installation exploring an individual's relationship to the city, combining illustration, cartography, sound design, written work and sculpture to produce an immersive multi-sensory experience.",
                "date_modified": "2024-03-17T13:37:42.367556+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sonniecarlebach",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "g3p-human-instruments",
        "pk": 2790,
        "published": true,
        "publish_date": "2024-03-03T19:37:30+01:00"
    },
    {
        "title": "Sonic Wings: A Wearable Live Electronics Device for Performing Mixed Music by Luciana Perc",
        "description": "The Sonic Wings are a wearable device that can capture audio, apply computational processes, and output new audio data in real time. This demo presents the device’s construction and the performance of a stochastic solo piece of mixed music for flute and live electronics using the device.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>Mounted to the performer&rsquo;s body, this system allows the performer to move freely while playing and interacting with the device across a performance space in which audiences stand and move spontaneously. This study&rsquo;s approach to interacting with body-mounted interfaces in musical performance engages with the figure of the cyborg as proposed by Haraway, exploring the embodiment of hybrid ontologies within musical performance.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/95f0726c02ddcf488f075cab255a1b2d.jpg\" /><a href=\"https://www.researchgate.net/publication/393898444_Sonic_Wings_A_Wearable_Live_Electronics_Device_for_Performing_Mixed_Music\">https://www.researchgate.net/publication/393898444_Sonic_Wings_A_Wearable_Live_Electronics_Device_for_Performing_Mixed_Music</a></p>",
        "topics": [
            {
                "id": 4158,
                "name": "Audience-performer interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 308,
                "name": "Live electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1097,
                "name": "mixed music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4160,
                "name": "new materialism",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4163,
                "name": "performer-technology interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4161,
                "name": "wearable speakers",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4162,
                "name": "wearable technologies",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 23499,
            "forum_user": {
                "id": 23473,
                "user": 23499,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/D92AF086-E498-472C-8134-DA2B98EDDE73-scaled.jpeg",
                "avatar_url": "/media/cache/14/71/14712758ca48374a20fe2def65134c88.jpg",
                "biography": "Luciana is a composer, performer and researcher creatively exploring technologies, such as live and fixed electronics and video, as well as the boundaries between art forms, namely instrumental theatre, intermedial performance and sound art. Her work has been recently presented at Line Upon Line’s Winter Composer Festival (Austin), Festival MÀD (Bordeaux) by Proxima Centauri, NIME (Utrecht), IRCAM’s Forum Workshops, Cite des Arts Paris, Darmstädter Ferienkurse, Tête-à-Tête: The Opera festival (London), Music of the Americas (NYC) by Ensemble 2e2m and San Diego Opera’s OperaHack 3.0 (US), Darmstädter Ferienkurse Open Space, Musikfestival Bern, Acht Brücken Festival Cologne, Playtime Festival, Gare du Nord Basel, Société de Musique Contemporaine Lausanne, three editions of CICTEM, and Centro Nacional de la Música. A fellow of the HEA, her teaching activity recently took place at London College of Communication’s (UAL) Guest Lectures Series, Latin Elephant’s Community Music Ensemble, the School of Creative Technologies (UoP), the Outreach department of the Festival d’Aix (France), Trinity Laban’s Learning and Participation department and Royal Central School of Speech and Drama.",
                "date_modified": "2026-02-03T17:51:24.708397+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lucianap",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sonic-wings-a-wearable-live-electronics-device-for-performing-mixed-music",
        "pk": 4307,
        "published": true,
        "publish_date": "2026-02-03T17:54:42+01:00"
    },
    {
        "title": "The Symphony of Civilisation by Jeanyoon Choi & Suhyun Lim",
        "description": "The Symphony of Civilisation is a multi-device web artwork, encompassing more than ten channels in a symphonic format. Structured in four movements, it offers a loosely connected, abstract cross-sectional representation of civilisation’s past, present, and future within an immersive setting.",
        "content": "<h2 id=\"the-symphony-of-civilisation-by-jeanyoon-choi-suhyun-lim\">The Symphony of Civilisation by Jeanyoon Choi &amp; Suhyun Lim</h2>\n<p>Our civilisation is brilliant, unfathomable almost. Reflect on humanity fulfilling the dream of flight &ndash; the evolution of marvellous transportation enveloping the spatial-temporal dimension we inhabit. Look at Artificial Intelligence, a complex system we built that operates beyond our comprehension. Observe the new epoch marked at Crawford Lake, signalling the time our civilisation dominates ecology. Welcome to the Anthropocene.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0c3ae80773547b154f2f901718b04ac3.jpg\"></p>\n<p>It is undeniable we are living in a golden age. But are we? Isn't our fragmented existence leading to mental illness? What about the persistent conflicts around the world? What about the accelerating climate crisis or the rise of numbers over humaneness? These are indeed troubling times, as civilisation seems more at risk of decline than of flourishing, with the Doomsday Clock standing at 90 seconds to midnight.</p>\n<p>Where are we headed? How can we resolve these issues for the present and the future?</p>\n<p>Composed in four movements, the Symphony of Civilisation mirrors the symphonic format, where each movement represents a certain period of humanity: Ancient, Post-Industrial, Contemporary, and Future. Rather than explicitly illustrating these eras, the symphony presents four contrasting cross-sections of civilisation. 
Just as traditional symphonies didn't narrate their stories directly but communicated the composer's intention through melodies and rhythms, this new multi-device web symphony&rsquo;s audiovisual scape is designed to immerse audiences poetically, creating harmony from multiple channels.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/805ad99311311e82968661f220f7d3f6.jpg\"></p>\n<p>The first three movements illustrate accelerationism, depicting an accelerating rhythm and tension. This suggests that our civilisation is accelerating beyond control, as hyperobjects exist beyond our understanding. Movement Three, for instance, highlights this theme with screens flickering rapidly across all channels at a speed beyond our perception. Each screen symbolises our fragmented and segmented contemporary world, each trying to optimise and shine in its own direction yet failing to improve society as a whole; we are still not clever enough to realise that optimising parts doesn't equal optimising the whole. Welcome to this accelerating zero-sum game. How will this end? Should we accelerate further, thus accelerating the catastrophe, as Marxist Accelerationists once claimed?</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/05d8e14bb7e400b9d7dab03b547ef176.jpg\"></p>\n<p>Here emerges the philosophy of Dionysian togetherness within the context of this artwork. Movement Four, the finale of the symphony, portrays a speculative future where individual boundaries fade and people are collectively immersed in a Dionysian experience. This is the only interactive movement, where audience members co-compose the symphony - contrasting sharply with the previous movement, where chaos emerges but audiences have no role other than that of passive spectators. In this fourth movement, audiences scan a QR code and conduct the symphony from their mobiles. 
The harder the phones are shaken, the louder the audiovisual experience becomes. Audience members&rsquo; faces are collaged from different angles and appear on the projector holistically, creating a profound sense of immersion and eliciting primal goosebumps. This immersive experience suggests that the future of civilisation should emerge from Dionysian togetherness. The symphony concludes that the alternative future towards envisioning Dong-Dong can be cultivated by our own hands, aspiring towards a brighter collective future, one where harmony arises from disharmony and collectiveness emerges from individualism.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/717a2e14612dc5a413bd8d1583badc89.jpg\"></p>\n<p>&nbsp;</p>\n<p>As a multi-device web symphony, The Symphony of Civilisation incorporates more than ten channels - including projectors, large displays, laptops, and audiences&rsquo; mobiles - each acting as an interconnected audio-visual instrument conducted from a single laptop. The harmony created from multiple devices forms a unique audiovisual landscape depicting the past, present, and future of civilisation, reminiscent of portraying the cityscape of each era. The four movements of the symphony each represent the Ancient, Post-Industrial, Present, and Future periods in chronological order.&nbsp;</p>\n<p>The First Movement: The Birth, symbolises the dawn of ancient civilisations worldwide. It begins with pure noise - a representation of nature and pure ignorance - soon transformed into vertical stripes, symbolising the birth of the artificial from the wild. Subsequently, a series of ancient architectural forms are displayed atop these stripes. 
With an accelerating rhythm, different architectures from early civilisations around the world are illustrated - from the Egyptians to the Silla Dynasty, depicting the birth of diverse civilisations worldwide.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1ed7b3b52b3ee79a3690f2e7cc175cf0.jpg\"></p>\n<p>The Second Movement: The Rise, illustrates the accelerating progress of civilisation within the post-Industrial era. We particularly focus on the evolution of transportation modes - from steamboats to jet planes - which have transformed the spatial-temporal dimension humanity inhabits, heavily influencing industrial civilisation. This movement features imagery generated by Midjourney, producing photorealistic images in a circular layout, poetically depicting the evolution of transportation and civilisation - as well as highlighting a homogeneity that contrasts with the diversity of the earlier movement.&nbsp;</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/6a0508369e6ac8694a5cade7aa716343.jpg\"></p>\n<p>The Third Movement: The Rhythm, depicts contemporary civilisation. Various aspects of digital consumerism - anonymous SNS profiles, shopping malls, online stocks, advertisements, notifications, delivery apps, memes, and ranking systems - are shown rhythmically across all channels. Initially uniform at 120 beats per minute, the Tone.js-generated rhythm becomes gradually irregular and non-linear, with unprecedented acceleration and deceleration. Audiences experience immersive chaos across all channels, all altered following a single repetitive yet unstoppable rhythm. 
This chaos across various channels represents the segmented and highly individualised contemporary civilisation, where all screens - all individuals - strive for their own success with full effort, which actually leads nowhere - depicting the gigantic zero-sum game we reside within.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/076860a6060f8b420d9782fff43dc4fa.jpg\"></p>\n<p>The Fourth Movement: The Dionysian, is the symphony's most interactive composition. Initiated from silence, it invites audiences to scan a QR code displayed on the screen. They are then prompted to shake their phones, with the mobile accelerometer causing the surrounding screens to brighten, and Mahler's Symphony No. 1 to swell with each shake, filling and augmenting the space. Webcams from different channels are interconnected through WebRTC, mashing faces from different angles across all screens. This collective experience allows many audience members to conduct and control the entire room together, embodying Nietzsche's notion of Dionysian immersiveness. It is the most poetic, interactive, communal, and hopeful movement of the whole symphony, with its communal interactiveness contrasting sharply with the previous movement, highlighting the theme and importance of togetherness.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a843299b4ab63aa2321982d7f5df9286.jpg\"></p>\n<p>The symphony utilises multiple desktop-connected projectors, over five laptops, and audiences&rsquo; mobile phones to compose a Multi-Device Web Symphony. Audiences are invited to use their own laptops and mobiles to participate and co-create this symphony. This presents an experimental form of new media art, situated hybridly between installation and performance. We believe that as techno-humans, exploring digital media in novel ways beyond AI's capabilities embodies a crucial facet of humanness. 
We hope that the Multi-Device Web Symphony, unlike single-device experiences, can present the converged potential of interactivity and collaborative immersiveness.</p>\n<p>Why present civilisation as the first Multi-Device Web Symphony? Numerous media artworks depict the future through descriptive speculations, often with highly polarised utopian or dystopian visions. Alternatively, we believe that the complexity of the contemporary world requires a more conceptual approach. We wanted to create a work that facilitates subtle reflection among audiences on the past and present of civilisation, guiding them towards an interactive and communal future by the end of the symphony. This idea led to the creation of a chronicle of civilisation in a symphonic format.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/55bbb2d48dfc48d428adbf2b20ad07b9.jpg\"></p>\n<p>The technical production of this artwork also aligns with these principles. Multi-channel screens showcase different aspects of civilisation, enabling the construction of a multi-layered composition that traditional moving images cannot produce. Various JavaScript-based frontend frameworks - React.js, Next.js, Styled Components - were employed for this composition. Specifically, we employed mobile accelerometer data propagated over WebSocket in real time within Movement Four to give audiences the experience of conducting their whole surroundings by shaking their mobiles. This symbolically highlights the importance of Sartrean &lsquo;Engagement&rsquo; towards the futuristic Dionysian vision we shall all co-create.&nbsp;</p>\n<p><br>This symphony will be premiered during the IRCAM Seoul Forum.</p>",
        "topics": [
            {
                "id": 2316,
                "name": "Jeanyoon Choi",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2319,
                "name": "Multi-Device Web Artwork",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2317,
                "name": "Suhyun Lim",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2320,
                "name": "Symphony of Civilisation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2318,
                "name": "The Symphony",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 85272,
            "forum_user": {
                "id": 85171,
                "user": 85272,
                "first_name": "Suhyun",
                "last_name": "Lim",
                "avatar": "https://forum.ircam.fr/media/avatars/ircam_d7X9Tok.jpg",
                "avatar_url": "/media/cache/64/17/641714381bde91cc99a4cf9b44fa4145.jpg",
                "biography": "A visual communication designer committed to provoking critical reflections on societal issues. With a BFA in Visual Communication Design from Chung-Ang University and current studies in the Industrial Design department at KAIST, the aim is to expand graphic expression methodologies that integrate design and technology.",
                "date_modified": "2025-07-10T10:49:48.460535+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "suhyunlim",
            "first_name": "Suhyun",
            "last_name": "Lim",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3042,
                    "user": 85272,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "the-symphony-of-civilisation-by-jeanyoon-choi-suhyun-lim",
        "pk": 3047,
        "published": false,
        "publish_date": "2024-10-22T02:17:47.246681+02:00"
    },
    {
        "title": "MYcorrhizal - Laura Selby, Yueshen Wu, Devanshi Rungta",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p><strong>MYcorrhizal </strong>is an extended-reality sonic experience mapping an interconnection between mycelium and beings. A duet of worlds between data signals, entities and scales of existence. Audiences can encounter and influence the sonic ecosystem around them, and reflect upon their role in the acoustic ecology of the spaces we exist within.</p>\r\n<p>The installation is inspired by the mycorrhizal bridges that exist within the forests of our Earth; connecting root, mycelium, and organism, enabling the resource distribution of data, nutrients and memory. Visitors are invited to consider the ways we can become consciously entangled within these worlds. In what ways do we as beings already influence the worlds that exist around us? What traces do we leave behind?&nbsp;<br />Through sound we can traverse temporally to worlds and species that on our human scale are seemingly invisible. Through revealing the acoustic ecologies around us and hearing the effects our traces leave, can we form our own mycorrhizal connections?</p>\r\n<p>Presented at the centre is a quadraphonic sonic sculpture, emitting fragments of an ecosystem combined with mycelium electrical spiking activity collected by Professor Andrew Adamatzky*. A global soundscape surrounds it, taking the sonic data of the ecosystem, the mycelium impulses and the live tracking of CO2 levels in the exhibit space to produce an evolving, learning, sonic landscape. By employing machine learning to integrate these different data sources, the generative soundscape is transformed by the passive and active interactions of the audience, including their breath and touch on the textile. The piece utilises spatialisation not only as a compelling storytelling tool but as a way to extend the reality of the generative ecosystem demonstrated.</p>\r\n<p>* FUNGAR. (2021). Datasets of recordings of electrical activity of substrates colonised by oyster fungi P. ostreatus and P. djamor. 
[Data set].</p>\r\n<p>Laura Selby, Yueshen Wu, <a href=\"https://forum.ircam.fr/profile/devanshirungta/\">Devanshi Rungta</a></p>",
        "topics": [
            {
                "id": 1232,
                "name": "Acoustic ecology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1229,
                "name": "Data Signals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1231,
                "name": "Ecosystem",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1228,
                "name": "extended reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 313,
                "name": "Machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 620,
                "name": "Spatialaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27330,
            "forum_user": {
                "id": 27302,
                "user": 27330,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7767643debb18e240531e7f4d2600bfd?s=120&d=retro",
                "biography": "Laura Selby is a London-based sound artist and violinist, with a background working in sound and music design for film, TV, immersive platform and theatre. Recent works include audio-visual haptic installation MYcelium at IRCAM forum (2022), Birth Rites score for the Designer in Residence Exhibit Ultima Thule at the London Design Museum (2019), ambisonic composition Shirley Dawn at London IKLECTIK and the Everyday is Spatial Immersive Audio Conference (2022) and most recently exhibiting A Room With A View installation for the LG OLED exhibit Luminous (2022). Current research explores the sonification and connection of varying scales of communication in space and time, creating multi-sensory installation works, utilising extended field recording techniques, audio immersion and musical composition.",
                "date_modified": "2024-05-23T15:49:18.810151+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lauraselby",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "mycorrhizal-1",
        "pk": 2153,
        "published": true,
        "publish_date": "2023-03-21T17:46:52+01:00"
    },
    {
        "title": "Women Embroidered Stoles – A Perfect Blend of Art & Fashion",
        "description": "Women Embroidered Stoles are a perfect combination of elegance and craftsmanship. Designed with intricate embroidery on premium fabrics like pashmina and wool, these stoles offer both warmth and style.",
        "content": "<h2>Introduction to Women Embroidered Stoles</h2>\n<p><a href=\"https://elaboreluxury.com/collections/women-embroidered-stoles\">Women Embroidered Stoles</a> are one of the most elegant and versatile fashion accessories, known for their intricate designs and luxurious feel. These stoles are crafted using premium fabrics like pashmina, wool, and cashmere, and decorated with detailed embroidery that reflects traditional artistry.</p>\n<p>They are not just winter accessories but statement pieces that enhance your overall style.</p>\n<hr>\n<h2>What Makes Women Embroidered Stoles Special?</h2>\n<p>The uniqueness of Women Embroidered Stoles lies in their craftsmanship and artistic detailing. Each stole is carefully designed with embroidery patterns such as florals, paisleys, and traditional motifs.</p>\n<p>Key features include:</p>\n<ul>\n<li>Intricate hand embroidery work</li>\n<li>Soft and lightweight fabric</li>\n<li>Elegant and timeless appeal</li>\n<li>Perfect balance of warmth and style</li>\n</ul>\n<p>These stoles combine comfort with luxury, making them a popular choice worldwide.</p>\n<hr>\n<h2>Craftsmanship Behind Women Embroidered Stoles</h2>\n<p>The making of Women Embroidered Stoles involves a detailed and time-consuming process:</p>\n<ul>\n<li><strong>Material Selection:</strong> High-quality pashmina or wool</li>\n<li><strong>Hand Weaving:</strong> Crafted on traditional looms</li>\n<li><strong>Embroidery Work:</strong> Techniques like Sozni, Aari, and Tilla</li>\n<li><strong>Finishing:</strong> Careful detailing and quality checks</li>\n</ul>\n<p>This handcrafted process ensures that every stole is unique and premium.</p>\n<hr>\n<h2>Types of Women Embroidered Stoles</h2>\n<p>There are various styles of Women Embroidered Stoles available:</p>\n<ul>\n<li>Sozni Embroidered Stoles</li>\n<li>Aari Work Stoles</li>\n<li>Zari Embroidered Stoles</li>\n<li>Floral Embroidered Stoles</li>\n<li>Kashmiri Designer Stoles</li>\n</ul>\n<p>Each type 
offers a different look, from subtle elegance to bold statement designs.</p>\n<hr>\n<h2>Why Choose Women Embroidered Stoles?</h2>\n<p>Women Embroidered Stoles are an ideal choice for fashion lovers who value both style and tradition:</p>\n<ul>\n<li>Perfect for weddings and festive occasions</li>\n<li>Suitable for both ethnic and western outfits</li>\n<li>Lightweight yet warm</li>\n<li>Long-lasting and timeless</li>\n</ul>\n<p>According to fashion insights, embroidered pashmina pieces are valued for their heritage craftsmanship and luxury appeal.</p>\n<hr>\n<h2>Styling Tips for Women Embroidered Stoles</h2>\n<p>You can style Women Embroidered Stoles in different ways:</p>\n<ul>\n<li>Pair with sarees or suits for a traditional look</li>\n<li>Combine with western outfits for fusion styling</li>\n<li>Drape over shoulders for an elegant appearance</li>\n<li>Use as a statement accessory in winter</li>\n</ul>\n<hr>\n<h2>Care Tips for Women Embroidered Stoles</h2>\n<p>To maintain your Women Embroidered Stoles:</p>\n<ul>\n<li>Dry clean for best results</li>\n<li>Store in breathable fabric bags</li>\n<li>Avoid direct sunlight</li>\n<li>Keep away from moisture and perfumes</li>\n</ul>\n<hr>\n<h2>Conclusion</h2>\n<p>Women Embroidered Stoles are the perfect combination of luxury, craftsmanship, and timeless fashion. Their intricate embroidery, soft texture, and elegant designs make them an essential accessory for every wardrobe.</p>\n<p>If you want to enhance your style with sophistication and tradition, Women Embroidered Stoles are the perfect choice.</p>",
        "topics": [
            {
                "id": 4543,
                "name": "Women Embroidered Stoles",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166341,
            "forum_user": {
                "id": 166105,
                "user": 166341,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a053613fe6f95130b8e798ec65e5832b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-01T13:44:58.436606+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "elaboreluxury",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "women-embroidered-stoles-a-perfect-blend-of-art-fashion",
        "pk": 4579,
        "published": false,
        "publish_date": "2026-04-02T07:21:31.928585+02:00"
    },
    {
        "title": "Latest software developments from the EAC Research Team (Acoustics & Cognition) by Thibault Carpentier (IRCAM)",
        "description": "",
        "content": "<div><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></div>\r\n<div></div>\r\n<div>In this short presentation, we will introduce the latest software developments from the EAC Research Team (Acoustics &amp; Cognition).</div>\r\n<div>These improvements concern all aspects of the toolbox: GUI objects, DSP components, command line tools, documentation and tutorials.</div>\r\n<div>These releases include a number of new features, bug fixes, and other improvements.</div>\r\n<div></div>\r\n<div><img src=\"/media/uploads/cristal_cnrs_2018_2-550x308.jpg\" alt=\"\" width=\"550\" height=\"308\" /></div>",
        "topics": [],
        "user": {
            "pk": 92,
            "forum_user": {
                "id": 92,
                "user": 92,
                "first_name": "Thibaut",
                "last_name": "Carpentier",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/5200b4214a3aff548eef81f9d804ae8b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-20T10:51:45.860663+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 446,
                        "forum_user": 92,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-21",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "tcarpent",
            "first_name": "Thibaut",
            "last_name": "Carpentier",
            "bookmarks": []
        },
        "slug": "latest-software-developments-from-the-eac-research-team-acoustics-cognition-by-thibault-carpentier-ircam",
        "pk": 4412,
        "published": true,
        "publish_date": "2026-02-20T14:49:27+01:00"
    },
    {
        "title": "Exquis a new MPE controller",
        "description": "Intuitive Instruments, a french based company, is proud to announce Exquis, a new MPE controller and app, based on their unique hexagonal keyboard. They have launched the pre-orders on Kickstarter.",
        "content": "<p>The Intuitive Instruments (formally Dualo) company has launched a <a href=\"https://www.kickstarter.com/projects/dualointuitiveinstru/exquis-the-smartest-way-to-create-expressive-music\">Kickstarter campaign</a> to fund production of the Exquis, a expressive MPE controller that features a hexagonal isomorphic keyboard, MIDI In/Out, CV/Gate out for controlling analog gear, and a variety of configurable controls.</p>\r\n<p><img alt=\"\" src=\"/media/uploads/user/2a080bcac0fd17e1d22cdbcf3d9e4c74.png\" /></p>\r\n<p>The Exquis features 54 expressive pads, 4 clickable endless knob controllers, a touch slider &amp; 6 configurable buttons. The pads give you continuous per-note control over pressure and pitch.</p>\r\n<p>The Exquis application &ndash; available for Windows, Mac OS, iOS, Android, Linux an Raspberry Pi OS &ndash; is designed to be the perfect companion for the keyboard, as well as a MIDI-compatible DAW.<br />At only ~200&euro;, it's one of the cheapest MPE controller available on the market.</p>\r\n<p><a href=\"https://www.kickstarter.com/projects/dualointuitiveinstru/exquis-the-smartest-way-to-create-expressive-music\">Their Kickstarter campaign is already a success</a>, with already +145k&euro; gathered. The campaign will end on Sunday the 13th of November at 17h CET.</p>",
        "topics": [
            {
                "id": 326,
                "name": "Control",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 307,
                "name": "Expressive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1000,
                "name": "hexagonal",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 74,
                "name": "Midi",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 999,
                "name": "MPE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 33353,
            "forum_user": {
                "id": 33305,
                "user": 33353,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/4bd9c1ba90003d2c26f7d6d0d6a22f09?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "brunointuitiveinstru",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "exquis-a-new-mpe-controller",
        "pk": 1972,
        "published": true,
        "publish_date": "2022-11-09T16:10:22+01:00"
    },
    {
        "title": "Discover Luxury Pashmina Women Scarf for Stylish Winter Fashion",
        "description": "Pashmina Women Scarf is a luxurious fashion accessory known for its ultra-soft texture and lightweight warmth. Crafted from fine Himalayan wool, it offers elegance and comfort for all seasons.",
        "content": "<h2>Introduction to Pashmina Women Scarf</h2>\n<p><a href=\"https://elaboreluxury.com/collections/pashmina-women-scarf\">Pashmina Women Scarf</a> is one of the most elegant and versatile fashion accessories, known for its unmatched softness and luxurious feel. Made from fine wool sourced from the Himalayan region, this scarf is lightweight yet incredibly warm.</p>\n<p>It is not just a winter accessory but a timeless piece that reflects sophistication and heritage craftsmanship.</p>\n<hr>\n<h2>What Makes Pashmina Women Scarf Special?</h2>\n<p>The uniqueness of Pashmina Women Scarf lies in its premium material and handcrafted quality. Each scarf is carefully woven to maintain softness, durability, and elegance.</p>\n<p>Key features include:</p>\n<ul>\n<li>Ultra-soft and lightweight texture</li>\n<li>Excellent warmth without heaviness</li>\n<li>Elegant and versatile design</li>\n<li>Suitable for all seasons</li>\n</ul>\n<hr>\n<h2>Craftsmanship Behind Pashmina Women Scarf</h2>\n<p>Creating a Pashmina Women Scarf involves skilled craftsmanship and traditional techniques:</p>\n<ul>\n<li><strong>Wool Collection:</strong> Fine fibres sourced from Himalayan goats</li>\n<li><strong>Hand Spinning:</strong> Maintains softness and quality</li>\n<li><strong>Hand Weaving:</strong> Crafted using traditional looms</li>\n<li><strong>Finishing:</strong> Detailed quality checks and finishing touches</li>\n</ul>\n<p>This process ensures every scarf is unique and premium.</p>\n<hr>\n<h2>Types of Pashmina Women Scarf</h2>\n<p>There are different styles of Pashmina Women Scarf available:</p>\n<ul>\n<li>Pure Pashmina Scarf</li>\n<li>Printed Pashmina Scarf</li>\n<li>Embroidered Pashmina Scarf</li>\n<li>Zari Work Scarf</li>\n<li>Lightweight Fashion Scarf</li>\n</ul>\n<p>Each style offers a unique blend of traditional and modern fashion.</p>\n<hr>\n<h2>Why Choose Pashmina Women Scarf?</h2>\n<p>Pashmina Women Scarf is a perfect choice for those who value comfort and 
elegance:</p>\n<ul>\n<li>Ideal for both casual and formal wear</li>\n<li>Perfect for travel and daily use</li>\n<li>Lightweight yet warm</li>\n<li>Long-lasting and timeless</li>\n</ul>\n<p>It is a must-have accessory for every wardrobe.</p>\n<hr>\n<h2>Styling Tips for Pashmina Women Scarf</h2>\n<p>You can style your Pashmina Women Scarf in multiple ways:</p>\n<ul>\n<li>Wrap around the neck for a cozy look</li>\n<li>Drape over shoulders for elegance</li>\n<li>Pair with western outfits for modern style</li>\n<li>Combine with ethnic wear for traditional appeal</li>\n</ul>\n<hr>\n<h2>Care Tips for Pashmina Women Scarf</h2>\n<p>To maintain your Pashmina Women Scarf:</p>\n<ul>\n<li>Dry clean for best results</li>\n<li>Store in a soft cloth bag</li>\n<li>Avoid direct sunlight</li>\n<li>Keep away from moisture and perfumes</li>\n</ul>\n<hr>\n<h2>Conclusion</h2>\n<p>Pashmina Women Scarf is the perfect combination of luxury, comfort, and timeless fashion. Its softness, warmth, and elegant design make it an essential accessory for every woman.</p>\n<p>If you want to enhance your style with sophistication and versatility, Pashmina Women Scarf is the ideal choice.</p>",
        "topics": [
            {
                "id": 4544,
                "name": "Pashmina Women Scarf",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166341,
            "forum_user": {
                "id": 166105,
                "user": 166341,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a053613fe6f95130b8e798ec65e5832b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-01T13:44:58.436606+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "elaboreluxury",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "discover-luxury-pashmina-women-scarf-for-stylish-winter-fashion",
        "pk": 4580,
        "published": false,
        "publish_date": "2026-04-02T07:31:43.934984+02:00"
    },
    {
        "title": "Modalys 3.7 public release",
        "description": "Latest version of Modalys, Ircam's sound synthesis framework based on physical models",
        "content": "<p>Modalys 3.7, our sound synthesis framework based on physical models, one of the oldest technologies developed at Ircam, was shown at Ircam Forum in March 2022, and was available as public beta. After weeks of finetuning, it is now more than time for the official release!</p>\r\n<p><a href=\"/projects/detail/modalys/\">Modalys 3.7</a> is freely available for download.</p>\r\n<p>Here are the highlights:</p>\r\n<ul>\r\n<li>native support for Apple ARM M1 machines (&ldquo;Silicon&rdquo;)</li>\r\n<li>automatic Max package installation (Mac and Windows)</li>\r\n<li>many bug fixes, improved performances and new features (especially in lua/3D area).</li>\r\n<li>extended support for LUA (mlys.lua object for Max)</li>\r\n<li>improvements to Medit (3D mesh viewer)</li>\r\n</ul>\r\n<p>With version 3.7, Modalys is entering a new era, and for the upcoming maintenance updates, we will focus on documentation, lua API completion (to make it equal if not superior to Lisp), and new examples with an emphasis on finite elements objects (3D).</p>\r\n<p><img alt=\"\" src=\"/media/uploads/user/83a78a51823f2e1a81b0125a4ef9878d.png\" /></p>",
        "topics": [
            {
                "id": 194,
                "name": "3d",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 820,
                "name": "finite elements",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 818,
                "name": "physical models",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 129,
                "name": "Real time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 344,
                "name": "Real-time audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 819,
                "name": "sound synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17617,
            "forum_user": {
                "id": 17613,
                "user": 17617,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/65285f24050c7dbd54422824b1a7c7cb?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-08-31T13:33:58.886455+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 737,
                        "forum_user": 17613,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "robert_p",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "modalys-3-7-release",
        "pk": 1186,
        "published": true,
        "publish_date": "2022-07-06T14:26:21+02:00"
    },
    {
        "title": "Everything you always wanted to know about spat, but were afraid to ask - Thibaut Carpentier",
        "description": "Presented during the IRCAM Forum Workshop 2023 In Paris.",
        "content": "<p>The main goal of this workshop is to answer YOUR questions about spat, panoramix &amp; co.</p>\r\n<p>No matter if you&rsquo;re a beginner or a seasoned user, come and ask your questions, show your Max patchers, discuss your projects and ideas, and we&rsquo;ll try to help you.</p>\r\n<p>There's no such thing as a stupid question. So don&rsquo;t be shy.</p>",
        "topics": [],
        "user": {
            "pk": 92,
            "forum_user": {
                "id": 92,
                "user": 92,
                "first_name": "Thibaut",
                "last_name": "Carpentier",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/5200b4214a3aff548eef81f9d804ae8b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-20T10:51:45.860663+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 446,
                        "forum_user": 92,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-21",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "tcarpent",
            "first_name": "Thibaut",
            "last_name": "Carpentier",
            "bookmarks": []
        },
        "slug": "everything-you-always-wanted-to-know-about-spat-but-were-afraid-to-ask-thibaut-carpentier",
        "pk": 2157,
        "published": true,
        "publish_date": "2023-03-24T10:53:59+01:00"
    },
    {
        "title": "Roaming Silence, 2024 - Shaye Thiel",
        "description": "\"Roaming Silence\", 2024 encourage les spectateurs à mieux comprendre la perte auditive et à faire le lien avec la propre expérience de l'artiste, qui a perdu la capacité d'apprécier les sons de tous les jours. Tous les sons ont été captés par les appareils auditifs de Thiel dans sa ville natale de Tucson, en Arizona (États-Unis), agissant comme une capsule temporelle de moments intimes en plaçant le participant en contact direct avec la notion de perte.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><span></span></p>\r\n<p><br />Pr&eacute;sent&eacute; par : Shaye Thiel<br /><a href=\"https://forum.ircam.fr/profile/shayethiel/\">Biographie</a></p>\r\n<p>Shaye Thiel (she/her) est une artiste am&eacute;ricano-canadienne bas&eacute;e &agrave; Londres dont les &oelig;uvres font partie de collections priv&eacute;es internationales. En collaboration avec d'&eacute;minents neuroscientifiques, ses recherches actuelles explorent les limites de la perception du son pour les personnes malentendantes et utilisant des appareils auditifs, par le biais d'installations sonores participatives &agrave; grande &eacute;chelle.</p>\r\n<p>S'appuyant sur une exp&eacute;rience v&eacute;cue, Mme Thiel consid&egrave;re le son comme un acte participatif par le biais d'&eacute;changes performatifs sp&eacute;cifiques au site entre les participants et la machine, imitant la relation qu'elle entretient avec ses appareils auditifs. Son travail vise &agrave; faire passer le public du r&ocirc;le d'observateur &agrave; celui d'observ&eacute; par le biais d'une &eacute;coute quantique plus profonde.</p>\r\n<p>\"Roaming Silence\", 2024 encourage les spectateurs &agrave; mieux comprendre la perte auditive et &agrave; faire le lien avec la propre exp&eacute;rience de l'artiste, qui a perdu la capacit&eacute; d'appr&eacute;cier les sons de tous les jours. Tous les sons ont &eacute;t&eacute; capt&eacute;s par les appareils auditifs de Thiel dans sa ville natale de Tucson, en Arizona (&Eacute;tats-Unis), agissant comme une capsule temporelle de moments intimes en pla&ccedil;ant le participant en contact direct avec la notion de perte. 
Cette &oelig;uvre place le voyage du spectateur au centre de l'exposition, alors qu'il navigue dans l'espace sans m&eacute;diation ni interf&eacute;rence de la part de l'artiste, l'encourageant &agrave; contempler ses sens &agrave; travers l'&eacute;coute et le mouvement autoguid&eacute;, tout en mettant l'accent sur la fragilit&eacute; du son. 24 pistes sonores isol&eacute;es seront diffus&eacute;es simultan&eacute;ment dans la pi&egrave;ce et deviendront de plus en plus fortes &agrave; mesure que le spectateur s'approchera de chaque haut-parleur - soulignant des moments tels que le p&egrave;re de l'artiste jouant du piano, des conversations avec la famille et les amis, des r&eacute;actions avec le vent et plus encore.<br /><br /><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>\r\n<p></p>",
        "topics": [
            {
                "id": 137,
                "name": "Artist ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1812,
                "name": "art performance",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 524,
                "name": "Design et traitement sonores",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 815,
                "name": "soundscape",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32795,
            "forum_user": {
                "id": 32747,
                "user": 32795,
                "first_name": "Shaye",
                "last_name": "Thiel",
                "avatar": "https://forum.ircam.fr/media/avatars/EJ39K5ERX-WM2VB6F71-9628c3b253e4-512.png",
                "avatar_url": "/media/cache/23/19/2319eb3858dc745739b76f8a78e4d573.jpg",
                "biography": "Shaye Thiel (she/her) is a London-based American-Canadian artist with work held in private collections across the United States, United Kingdom, Canada and Germany. In collaboration with leading neuroscientists, her current research explores the boundaries of sound perception for individuals who are hard of hearing and use hearing aids.\n\nBased on lived experience, Shaye views sound as a participatory act via site-specific performative exchanges between participants and machine. Her work aims to transition the role of the public from observer to the observed via a deeper Quantum Listening.",
                "date_modified": "2024-03-21T13:00:45.936872+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "shayethiel",
            "first_name": "Shaye",
            "last_name": "Thiel",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2752,
                    "user": 32795,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "roaming-silence-2024",
        "pk": 2752,
        "published": true,
        "publish_date": "2024-02-19T11:15:48+01:00"
    },
    {
        "title": "Sympoietic Being, full dome film (Zeiss Großplanetarium 2024 and Sous Dôme Festival 2025)",
        "description": "The project investigates the integration between Spat5 and Unreal Engine 5. Developed through long-distance collaboration (Berlin-Paris), it bridges immersive audio design, real-time visual interaction, and conceptual aesthetics drawn from posthumanist theory.",
        "content": "<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<h4><strong><a href=\"https://vimeo.com/1069225302\" target=\"_blank\">Click here for full film in rectangular format and binaural decoding (redirected to vimeo.com)</a></strong></h4>\r\n<br />\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong></strong></p>\r\n<p><strong>Relational Aesthetics</strong></p>\r\n<p><span>This project coincided with a recent paper I wrote about Nicolas Bourriaud&rsquo;s Inclusions: Aesthetics of the Capitalocene, exploring issues related to posthumanism and the deconstruction of narratives inherited from the Western Renaissance. In his text, Bourriaud examines the evolving role of art amid ecological crises and proposes a \"molecular anthropology,\" focusing on the interactions between human and nonhuman entities. Simultaneously, my collaborator was investigating minerals and creating 3D projections. Both of us agreed on the concept of a crystal as the core narrative element, tracing its journey through a hyperdimensional cave and, ultimately, out into the open world. </span></p>\r\n<p><span>One of the aesthetic concepts of the piece is the use of mirrors, inspired by Robert Morris&mdash;an artist discussed in Bourriaud's book&mdash;and his minimalist mirror artworks. 
Observing Morris&rsquo;s pieces, particularly <em>Untitled (Williams Mirrors)</em> (1967) and <em>Strike</em> (2012), sparked the desire to explore similar visual phenomena within the Unreal Engine environment.<br /><br /><a href=\"https://vimeo.com/1022682685\" target=\"_blank\" title=\" Mirror dimension process film click here (redirected to Vimeo.com)\"><span>Mirror dimension process film click here (redirected to Vimeo.com)</span></a></span></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>With the aforementioned aesthetic framework in mind, incorporating text into the second scene served to further solidify the conceptual grounding of the piece. Its intention was to offer the listener a space for sonic and visual contemplation, as well as conceptual engagement with the visual environment. Throughout the scene, viewers are immersed in a tesseract-like dimension, where objects reflect across mirrors, creating the illusion of infinity. 
In this sense, the voice makes good use of the Spat5 HOA convolution reverb tool ( </span><code><span>spat5.hoa.conv~ </span></code><span>), highlighting the idea of the apparent massiveness of the &lsquo;mirror-cave.&rsquo;</span></p>\r\n<p><br /><img src=\"/media/uploads/sympoetic-being-01.jpg\" width=\"800\" height=\"519\" style=\"display: block; margin-left: auto; margin-right: auto;\" alt=\"Screenshot of the film in rectangular ratio\" /></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p style=\"text-align: center;\" class=\"wys-small-text\"><em>Screenshot of the film in rectangular ratio</em></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong><br />Carving the Stone </strong></p>\r\n<p><span>The 3D environment of this project was mostly built in Unreal Engine 5, with certain frames and objects created using Blender. The piece includes several 3D scans of minerals like amethyst and quartz, alongside reflective and translucent materials made within UE5. Arthur Wardenski primarily handled the visual design and camera animation, refining aspects like lighting, color, layout, and modeling of the cave and its outer environment. I worked on the visuals of the tesseract scene (2nd scene)&mdash;its 3D design, camera movements, and material choices (reaching for the infinity mirror by Morris). 
This scene played a key role in the technical development of the piece, as it provided an experimental ground for transmitting the coordinates of sound objects created in Spat5, as well as for level information translated into light behavior in the spheres and crystals in UE5.<br /><br /></span></p>\r\n<p><span><img src=\"/media/uploads/sympoetic-being-02.jpg\" alt=\"\" width=\"800\" height=\"491\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\"><br />\r\n<p><strong>Disappearance of Distance </strong></p>\r\n<p>Given the geographical distance between collaborators, an efficient workflow was essential. With ADM files created using Spat5 and a customized version of the <code>ADM recorder</code> patch, <em>Blueprints</em> (Unreal Engine's patching environment) were developed to efficiently and dynamically assign OSC data to properties such as brightness, light color, and spatial coordinates of visual objects called <em>Actors</em>. It was crucial to precisely identify the necessary range and format of the data, filtering out any additional information that might slow down the workflow and intercommunication between platforms. The data were filtered down to the peak level per sound object and the Cartesian coordinates.</p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<p><span><br /><img src=\"/media/uploads/sympoetic-being-03.jpg\" alt=\"\" width=\"905\" height=\"616\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br /></span></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span><br /><strong>The Sounds of the Hypercave </strong></span></p>\r\n<p><span>Initially, a multichannel (mc.) FM synthesizer was developed in MaxMSP. It was used as a tool to create (or recreate) acoustic phenomena such as sound bouncing off walls or generating early reflections through the careful placement of sound objects and delay matrices ( </span><code><span>spat5.delgen </span></code><span>) with logarithmic gain offsets. To achieve an authentic cave-like ambience, impulse responses were captured from narrow spaces around Berlin. Using a first-order Ambisonics microphone, recordings were made in a mid-20th-century bunker in Friedrichshain and in the abandoned Soviet building in Vogelsang Zehdenick. 
These recordings were then upmixed to fifth-order using the Sparta Ambisonics tools, enhancing spatial resolution&mdash;particularly effective for use with reverbs, in this case </span><code><span>spat5.hoa.conv~ </span></code><span>. The resulting reverberation convincingly evoked the atmosphere of a cave and was extensively employed throughout the piece (in parallel wiring), especially in processing vocals, acoustic sources, and field recordings. Additionally, re-amped field recordings were used, re-recorded in FOA, and upmixed to HOA as well. This process further contributed to an overall sense of depth. Granular processing of glass and cello sounds provided textural contrast against the softer backdrop of re-amped water droplets. </span></p>\r\n<p><span>For literal immersion, the underwater scene's soundscape was created using hydrophone recordings, duplicated and modulated to convey a sensation of moving beneath the ocean&rsquo;s surface. These sounds were placed in the acoustic space via sound objects within Spat5. A snapshot recording system was implemented for simultaneous automated movement across 16 sources, integrating ICST tools. </span></p>\r\n<p><span>The decoding process was more complex, but from a technical perspective, it yielded highly effective results. Generous input from Thibaut Carpentier contributed to the development of a solution tailored specifically for both spaces (Zeiss and Cité des Sciences). 
This involved energy-preserving decoding (EPAD), adjusting the speaker coordinates of the dome to the listeners&rsquo; zero-elevation point, and selecting a suitable Ambisonics order for each screening.</span></p>\r\n<p><span><img src=\"/media/uploads/sympoetic-being-04.jpg\" alt=\"\" width=\"899\" height=\"431\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p style=\"text-align: center;\" class=\"wys-small-text\"><em>Zeiss HOA decoder and Cité des Sciences HOA decoder </em></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>Both screenings, in Berlin and Paris, were exciting and very satisfying experiences. Naturally, minor inconsistencies emerged here and there. The most significant difference was between the planetariums themselves: the Paris dome offered superior visual quality but was substantially less precise in spatial resolution due to the lower resolution of its audio system. Additionally, Ambisonics generally favors listeners positioned at the sweet spot, prompting me to consider whether a different panning approach might be more suitable in future projects&mdash;such as DBAP or KNN&mdash;to reduce reliance on a single optimal listening position. </span></p>\r\n<p><span>On the other hand, the initial intention of correlating coordinates of sound objects with visual objects in virtual space also prompted some retrospective considerations. 
It appeared that, unless presented in a clear and simplified manner, this correlation didn't substantially add unique content beyond emphasizing spatial cues. However, the concept of being enclosed within a space, such as a cave, ultimately provided greater depth to the spatial storytelling, offering more useful materiality overall for the experience. These materials, of course, depend heavily on the resolution of the audio system and the listener&rsquo;s seating arrangement. </span></p>\r\n<p><span>Overall, using ADM files was beneficial; however, rendering audio from Spat through the </span><code><span>ADM recorder </span></code><span>required tweaking and customization to match the specific needs of the patch. Additionally, the real-time nature of the sound-making process complicated rendering for both audio and visuals, as the ADM files needed to be played back while simultaneously capturing data with the UE5 sequencer using dynamic <em>Materials</em> and <em>Blueprints</em>. 
Nonetheless, the journey toward integrating UE5 proved fruitful, and the interface developed through this process provides opportunities for numerous new artistic projects and audiovisual interactive frameworks.</span></p>\r\n<br />\r\n<p><span></span></p>\r\n<p><span><img src=\"/media/uploads/sympoetic-being-05.jpg\" alt=\"\" width=\"799\" height=\"533\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p style=\"text-align: center;\" class=\"wys-small-text\"><span><em>Photo by Kathiyn Schiedt</em></span></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span><br />Project developed by Vicente Yáñez (sound and visuals) and Arthur Wardenski (design and visuals)<br /> </span></p>\r\n<p><span>This project was created with the support of Sound Studies and Sonic Arts, Universität der Künste Berlin and DSAA Design et Création Numérique, École Estienne Paris<br /><br /></span></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><a href=\"https://www.vicenteyanez.com/\" target=\"_blank\" title=\"Vicente Y&aacute;&ntilde;ez website\">Vicente Yáñez website</a></p>\r\n<p><span><a href=\"https://wardenskiart.cargo.site/\" target=\"_blank\" title=\"Arthur Wardenski website\">Arthur Wardenski website</a> </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<p><span> 
</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 131,
                "name": "Fulldome",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 950,
                "name": "OSC ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 274,
                "name": "Soundart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 276,
                "name": "Spat 5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 283,
                "name": "Theory",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1115,
                "name": "Unreal Engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 113544,
            "forum_user": {
                "id": 113397,
                "user": 113544,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/_DSC4790.jpg",
                "avatar_url": "/media/cache/6b/6f/6b6fa9cb70583ac7100dc94162a98f36.jpg",
                "biography": null,
                "date_modified": "2025-10-02T00:03:07.665073+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "vyanezf",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3403,
                    "user": 113544,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "sympoietic-being-full-dome-film-zeiss-groplanetarium-2024-and-sous-dome-festival-2025",
        "pk": 3403,
        "published": true,
        "publish_date": "2025-04-25T14:57:59+02:00"
    },
    {
        "title": "Human play first - Organic musicians and sequences without a click track",
        "description": "What are the obstacles to using a looper to drive a DAW and sequences via MIDI Clock? What would the specifications be for a working Audio->Looper->Sync interface? Implications for live workflow and for creation.",
        "content": "<p>I was there, in 1999, the day the rhythm section lost the war against the machines.</p>\n<p>Sylvain and I were walking down the double-helix staircase of an underground car park where a drummer friend of his had rented a box on the second basement level to rehearse. He was quite pleased with his day. He was working on the new thing you had to have to get gigs.<br>Playing to a click.</p>\n<p>&nbsp;</p>\n<p>Until then, I had occasionally seen a drummer with a drum machine and a pair of headphones lying nearby. At the end of a song he would sometimes put on the headphones, check the tempo, then tap out the six stick clicks that launched the downbeat of the next one: 1..2.. 1 2 3 4 ...<br>Computers on stage were reserved for electro performances, where the setup was sometimes hidden behind a sheet. There were also alternative rock bands who played with a drum machine. It was one or the other: drummer or machine.<br>I suppose the big tours of signed bands already had plenty of elaborate rigs, but that was show business. We were making music. For a drummer with a jazz background, playing to a click had until then been an indignity. The pulse came from the two guys at the back: the drummer and the bass player.<br>The rhythm section.</p>\n<p>&nbsp;</p>\n<p>Except that around 1999 something arrived that changed everything: the CD burner inside the computer, with discs you could buy for under 50 francs. At last, a way to save your work cheaply.<br>Consumer digital audio went back to the Atari Falcon, ten years earlier. But in those days, saving 8 seconds of sound to a floppy disk took 5 minutes.<br>So in 1999 we went from small snippets of sound in the sampler to any sound of any length in the computer.<br>Twenty-five years on, swiping through Insta last week, I came across a guitarist's post: a solo from his concert, but with the sound he hears in his in-ear monitor.<br>And loud above the solo you hear tic tac tac tac tic tac tac tac.<br>In the comments I read:<br>\"Yeeeeeah the click is sick, so on time 🔥😌\"<br>Playing rock to a click is now perfectly normal.</p>\n<p>&nbsp;</p>\n<p>Back to 2000, when I found myself on stage with my computer, surrounded by musicians from the free-jazz scene.<br>My sound palette included effects on the instruments, various sound textures, and rhythmic loops.</p>\n<p>During an improvisation, the sonic material that appears generally starts out asynchronous, but almost always a cell emerges, repeats itself, and a rhythm, a pulse, appears. And I had no technical means of \"joining\" a pulse unless I was its origin.<br>In other words, to benefit from the power of repetition that a machine allows, the machine had to start first.</p>\n<p>Human creativity being limitless, musicians have found countless ways around this constraint:</p>\n<ul>\n<li>you could start rubato and launch the machine during a silence;</li>\n<li>you could start on a drone and let the rhythm emerge with a filter;</li>\n<li>you could use a looper to get repetition, but without a sequencer;</li>\n<li>you could press a rhythm onto vinyl and have a DJ play it;</li>\n<li>you could also keep the electro style and drop the computer...</li>\n</ul>\n<p>Another solution was feeding a metronome into earphones the musician would wear on stage: the click track.<br>It lets you start the machine without the audience noticing.<br>You can then begin SEEMINGLY acoustically, with the guarantee that the electro rhythm section will come in right on the groove.</p>\n<p>But that did not solve my problem of improvising with jazz musicians.</p>\n<p>&nbsp;</p>\n<p>One solution was to use a looper to \"catch\" the pulse coming from the rhythm section. This looper would generate MIDI Clock, which would drive my sequencer.<br>That is the method the band Battles used in 2007, with a delay designed in 1994: the Oberheim Echoplex Digital Pro, taken over by Gibson in 2001.<br>Alas, I did not know this machine existed; incidentally, it also cost as much as a sampler.</p>\n<p>In 2001, the first loop stations appeared, from Roland and Digitech among others.<br>I naturally plugged the MIDI In of my Emulator into the MIDI Out of a \"loop station\". It worked when playing alone under optimal conditions, but not in a real playing situation. We will see why later.</p>\n<p>Torn between my taste for electronic music and my need for collective improvisation, I was in a creative dead end.<br>It was a colleague from IRCAM who told me about Max/MSP, in 2004. I programmed a looper right away, with deceptive ease. The gears had caught hold of me.</p>\n<p>&nbsp;</p>\n<p>While paying off my debts after the inevitable bankruptcy of an unsubsidized free-jazz electro project, I kept working on my patch.<br>As the experiments went on, a number of requirements became clear.</p>\n<ul>\n<li>First of all, you must be able to recover when you miss your loop.<br>If it is the start of a solo piece, no drama: you start again.<br>But if the loop is the fruit of an emergence, you will never get it back.<br>So you must be able to adjust the loop's in and out points without stopping playing.</li>\n<li>Next, you must be able to resynchronize the machines slaved to MIDI Clock for the moment when, for whatever reason, they drift out of sync. With the certainty that this will happen. So it has to be a simple gesture.</li>\n<li>Third, the MIDI Clock signal must never stop, on pain of crashing half the connected machines.</li>\n</ul>\n<p>There is another, more technical requirement: being able to compensate for, or deliberately generate, delays on the audio and MIDI streams, for two reasons:</p>\n<ul>\n<li>The accumulation of latency in complex systems, where the sound may pass through several conversions or interfaces.<br>This latency, acceptable as long as it stays below 10 ms, can quickly reach tens of milliseconds when you use loopback and several applications, and well over 100 ms when you use the audio input of a Windows PC live.</li>\n<li>The other reason to generate a lag, a latency, is a matter of groove. What do you do if the loop that is running so nicely is not on the downbeat? It may be slightly ahead of or behind the beat, or beat one may not be the musical choice at all, as with a syncopated drum pattern.</li>\n</ul>\n<p>The audio and MIDI engines were ready in 2007, and it became possible for me to follow other players with a system completely open to improvisation. A great many regular sessions have followed, right up to today.</p>\n<p>&nbsp;</p>\n<p>The main musical flaw of this kind of setup is what I would call uncontrolled accumulation. You add, and add, and then you do not have enough hands for radical transitions. You therefore have to give yourself the technical means to vary the dynamics of the music.<br>The same problem can affect improvisation groups, which sometimes use open structures to agree in advance on directions or abrupt transitions.</p>\n<p>Since this is an electronic music system whose technical elements can vary widely, and after piling up ever more complicated Max subpatches to support these structures, I ended up programming a script sequencer inspired by theatre lighting consoles: a chain of GOs, each sending a snapshot of the looper's state along with MIDI commands.</p>\n<p>&nbsp;</p>\n<p>This patch has been freely downloadable from my website since 2014.<br>It has been the heart of my GOTO system since 2012, when quad-core laptops appeared with enough computing power for this kind of real-time setup. It combines the various Max patches that make up Lagvoid with Ableton Live and with Cantabile, which serves as a virtual console and records all internal and external audio streams. Communication between these applications requires virtual MIDI ports and a sound interface well provided with loopback.</p>\n<p>But Lagvoid is a prototype, and therefore remains tricky to deploy without dedicated hardware.</p>\n<p>&nbsp;</p>\n<p>Nevertheless, this approach has profoundly changed the workflow of my productions, especially when I can afford to hire an improvising musician who plays an organic instrument.<br>Whatever the degree of preparation at the start of a session, it turns into one long single take, favoring the spontaneous emergences that lead to climaxes, with the certainty of being able to rework everything AS IF it had been recorded to a click.<br>All that is needed is for the system to generate a sound file of exactly the length of the loop produced by the emergence.</p>\n<p>This is a great time saver, but above all it is the certainty of having captured unique moments in the best possible way.</p>\n<p>&nbsp;</p>\n<p>So, to sum up, the \"Human Play First\" functions, or rather the Lagvoid functions, are:<br>- a system built on continuous distribution of MIDI Clock<br>- the MIDI Clock source is a looper<br>- the loop can be modified without interrupting the MIDI Clock stream<br>- the downbeat can be shifted<br>- all streams can be recorded live<br>- the tempo can be recorded</p>\n<p>There are many technical solutions that come close, starting with Ableton Live, which has included a looper since version 8, in 2009.<br>There is also Loopy Pro, an iPad app released in 2021, which combines a looper and a clip player.<br>Then the very many hardware and software loopers.<br>I have read forum reports of successful setups where the Boss RC 600 serves as the master clock.</p>\n<p>Nevertheless, to my knowledge there is still no device on the market that includes all of these functions. That is why I am publishing this article and have started contacting manufacturers, so that there may finally exist a MIDI Clock generator focused not on jitter suppression but on generating effective synchronization from organic performances.</p>\n<p>&nbsp;</p>\n<p>Fr&eacute;d&eacute;ric Malle</p>",
        "topics": [
            {
                "id": 3347,
                "name": "homme machine",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3346,
                "name": "livelooping",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3344,
                "name": "looper",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3342,
                "name": "MIDI clock",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3343,
                "name": "MIDIClock",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3345,
                "name": "synchronisation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 124956,
            "forum_user": {
                "id": 124791,
                "user": 124956,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/b12499a86fa1c03685415f4ed27bc070?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-09-03T11:13:55.227038+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lagvoid",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "human-play-first-musiciens-organiques-et-sequences-sans-click-track",
        "pk": 3656,
        "published": false,
        "publish_date": "2025-09-02T11:24:31.277804+02:00"
    },
    {
        "title": "Atelier ASAP - Pierre Guillot",
        "description": "In this workshop, Pierre Guillot will present the features offered by the ASAP plug-in collection, and in particular the plug-ins based on ARA2 technology.",
        "content": "<p><strong></strong></p>\r\n<p><strong><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></strong></p>\r\n<p><strong></strong></p>\r\n<p>Presented by: Pierre Guillot<br /><strong><a href=\"https://forum.ircam.fr/profile/guillot/\">Biography</a></strong></p>\r\n<p><strong><br /><iframe width=\"560\" height=\"314\" style=\"display: block; margin-left: auto; margin-right: auto;\" src=\"https://www.youtube.com/embed/p2Xic7EV4mA?si=CUZBDoQPi-uzS22S\" allowfullscreen=\"allowfullscreen\"></iframe></strong></p>\r\n<p></p>\r\n<p><strong>ASAP</strong> is a <strong>collection of audio plug-ins</strong> for transforming sound in creative ways. You are invited to play with the representation of the sound and with the synthesis parameters to generate new sounds. The plug-ins can also be used to correct flaws in a sound and to improve the audio rendering. Thanks to the ARA2 integration, spectral transformations are embedded directly in your editing workflow.</p>\r\n<p>👉<a href=\"https://forum.ircam.fr/projects/detail/asap/\"> ASAP project page</a></p>\r\n<p>In this workshop, Pierre Guillot will briefly present the historical heritage and the artistic and research context in which the ASAP plug-ins were developed, highlighting the challenges and the innovative nature of the project. He will then present the features offered by the ASAP collection, and in particular the plug-ins based on ARA2 technology. 
The Spectral Surface plug-in lets you draw shaped filters on the spectrogram of a sound and control their gain and fade. The sound representation and the user interface make it possible to create very complex and precise surface filters to reduce or enhance specific parts of the sound's spectral components, compensate for annoying artifacts, isolate particular features of the sound, and transform it creatively. The Pitches Brew plugin transposes the pitch and formants of sounds by drawing and editing their frequency curves. Beyond the exceptional quality of the processing, the plugin offers a visual representation of the original fundamental frequencies, the target pitches, and the formants, with curves that allow many original edits such as re-slicing, transposition, stretching, copying, and more.</p>\r\n<p></p>\r\n<p>Pierre Guillot holds a doctorate in aesthetics, science, and technology of the arts, specializing in music. He defended his thesis at the University of Paris 8 in 2017 within the programs of the Laboratoire d'Excellence Arts-H2H. Throughout his research career, he has taken part in the creation of numerous projects and tools for music, notably the HOA ambisonic sound spatialization library, the collaborative patching software Kiwi, and the multi-format, multi-platform plugin Camomile. 
In 2018, he joined IRCAM in the Innovation and Research Means department, where he is in charge of projects such as Partiels, ASAP, and TS2.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 925,
                "name": "ASAP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 89,
                "name": "Pitch",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 202,
                "name": "Plugins",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 352,
                "name": "Time-stretch",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 123,
                "name": "Transposition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 22,
                "name": "Voice",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "asap-workshop",
        "pk": 2755,
        "published": true,
        "publish_date": "2024-02-19T14:27:35+01:00"
    },
    {
        "title": "RAVE Model Challenge - Award Ceremony",
        "description": "Join us in celebrating the winners of the RAVE Model challenge.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/images/rave_model_challenge_v5.jpeg\" /></p>\r\n<h1><strong>DESCRIPTION:</strong></h1>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/rave/\"><span>RAVE (Realtime Audio Variational autoEncoder)</span></a><span>&nbsp;is an algorithm designed for real-time, high-quality audio waveform synthesis using neural networks. It leverages a variational autoencoder (VAE) architecture, which compresses audio data into a compact latent representation, allowing efficient reconstruction of audio signals.&nbsp;</span></p>\r\n<p><span>Key features of RAVE include:&nbsp;</span></p>\r\n<ul>\r\n<li><span>Fast, high-quality audio generation: It excels at producing accurate audio in real-time, making it ideal for interactive applications (20x real-time at 48 kHz sampling rate on standard CPU)</span></li>\r\n<li><span>Real-time use: Integrated with tools like Max and Pure Data (Pd), RAVE can be used with the nn~ decoder for real-time sound generation and transformation. 
A&nbsp;</span><a href=\"https://forum.ircam.fr/projects/detail/rave-vst/\"><span>VST plugin</span></a><span>&nbsp;makes it easy to use in any DAW.</span></li>\r\n<li><span>Applications: Common uses include audio synthesis, timbre transformation, and style transfer.</span></li>\r\n</ul>\r\n<p><span>In short, RAVE is a powerful tool for real-time audio generation, offering both speed and quality.</span></p>\r\n<p><span>In just a few months,&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/rave/\"><span>RAVE</span></a><span>&nbsp;popularized the creation of models based on audio recordings, thanks in particular to the publication of&nbsp;</span><a href=\"https://forum.ircam.fr/article/detail/tutoriel-rave-and-nn/\"><span>a series of tutorials</span></a><span>&nbsp;and&nbsp;</span><a href=\"https://github.com/acids-ircam/RAVE\"><span>open-source code</span></a><span>. A growing and&nbsp;</span><a href=\"https://discord.gg/ygSqsj5pVH\"><span>ebullient community</span></a><span>&nbsp;of users took hold of the algorithm, and&nbsp;</span><a href=\"https://acids-ircam.github.io/rave_models_download\"><span>numerous models emerged</span></a><span>. Although these models can be quite costly to produce (around twenty GPU hours), very few have so far been published, often due to copyright issues. 
This challenge concerns models trained on personal recordings for which the authors own all rights.</span></p>\r\n<p><span>The aim of this challenge is to support the authors of the best models and to collectively establish a repertoire of&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/rave/\"><span>RAVE</span></a><span>&nbsp;models, enabling everyone to benefit from the richness and variety of approaches in the field of timbre/music transfer.&nbsp;</span></p>\r\n<p><span>The challenge is hosted by the&nbsp;</span><a href=\"https://dafneplus.eng.it/\">DAFNE+</a><span>&nbsp;platform, which promotes content using NFTs.&nbsp;</span></p>\r\n<p><span>A public vote awards three prizes to participants.&nbsp;</span></p>\r\n<h1><strong>PRIZE:</strong></h1>\r\n<p><span>The awards ceremony will take place during the&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><span>IRCAM Forum Workshops 2025</span></a><span>, March 28, 2025 at IRCAM, Paris.</span></p>\r\n<ul>\r\n<li><span>1st award: 2000&euro; plus one year IRCAM Forum Premium Membership</span></li>\r\n<li><span>2nd award: 1000&euro; plus one year IRCAM Forum Premium Membership</span></li>\r\n<li><span>3rd award: 500&euro; plus one year IRCAM Forum Premium Membership</span></li>\r\n</ul>\r\n<p>If multiple entries receive the same number of winning votes, their prizes and the following ones will be shared among them. 
For example:</p>\r\n<ul>\r\n<li>If two candidates tie for the highest score and a third has the next highest, the first two will share (2000+1000)/2 =&nbsp;<strong>&euro;1500 each</strong>, and the third will receive the&nbsp;<strong>third prize of &euro;500</strong>.</li>\r\n<li>If one candidate has the most votes (<strong>&euro;2000 first prize</strong>) and three candidates tie for the second-highest votes, their prize will be&nbsp;<strong>(1000+500)/3 = &euro;500 each</strong>.</li>\r\n</ul>\r\n<h1><strong>EVALUATION</strong></h1>\r\n<p><span>The three prizes will be awarded by vote of members registered on the&nbsp;</span><a href=\"https://dafneplus.eng.it/\"><span>DAFNE+ platform</span></a><span>&nbsp;(free registration), rewarding the three models with the highest number of votes (in descending order for the 3 prizes). The models will be published on the DAFNE+ platform Marketplace with tag &ldquo;RAVE Model Challenge&rdquo;. From February 1, 2025, members will be able to download the models to evaluate them, as well as listen to the audio files to vote for their favorite model. 
The link to the voting platform will be provided on February 1, 2025 and voting will close on February 28 (noon CET).&nbsp;</span></p>\r\n<h2>Links:</h2>\r\n<ul>\r\n<li>RAVE Model Challenge:&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/rave-model-challenge/\">https://forum.ircam.fr/collections/detail/rave-model-challenge/</a></li>\r\n<li>RAVE collection:&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/rave/\">https://forum.ircam.fr/collections/detail/rave/</a></li>\r\n<li>DAFNE+ Platform:&nbsp;<a href=\"https://dafneplus.eng.it\">https://dafneplus.eng.it</a></li>\r\n<li>DAFNE+&nbsp;Website:&nbsp;<a href=\"https://dafneplus.eu\">https://dafneplus.eu</a></li>\r\n<li>DAFNE+&nbsp;Discord:&nbsp;<a href=\"https://discord.gg/aR6VvV9Ttw\">https://discord.gg/aR6VvV9Ttw</a></li>\r\n<li>DAFNE+&nbsp;Survey:&nbsp;<a href=\"https://forms.gle/czcJyXhmthFkN5V48\">https://forms.gle/czcJyXhmthFkN5V48</a></li>\r\n<li>DAFNE+&nbsp;YT tutorials playlist:&nbsp;<a href=\"https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ\">https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ</a></li>\r\n<li>DAFNE+&nbsp;YT intro to Use-Case 2:&nbsp;<a href=\"https://dafneplus.eu/2024/02/interview-with-hugues-vinet-ircam-explaining-use-case-2/\">https://dafneplus.eu/2024/02/interview-with-hugues-vinet-ircam-explaining-use-case-2/</a></li>\r\n<li>DAFNE+&nbsp;Newsletter:&nbsp;<a href=\"https://dafneplus.eu/contact\">https://dafneplus.eu/contact</a></li>\r\n<li>DAFNE+&nbsp;Contact:&nbsp;<a href=\"mailto:info@dafneplus.eu\">info@dafneplus.eu</a></li>\r\n</ul>\r\n<h1><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/rave_model_challenge_banniere.png\" /></h1>",
        "topics": [
            {
                "id": 2375,
                "name": "challenge",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2376,
                "name": "model",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1745,
                "name": "nn~",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "rave-model-challenge-award-ceremony",
        "pk": 3331,
        "published": true,
        "publish_date": "2025-03-06T14:39:21+01:00"
    },
    {
        "title": "Mangrove (=KANDEL) of Memory ~ HOA Proximity Acoustic Expression through Integrated Wearable Auditory AR and Multichannel Loudspeaker Systems - Hiromichi Kitazume, Jin-Young Lee and Ken Ito",
        "description": "HOA proximity acoustic expression through the integration of wearable auditory AR and multichannel loudspeaker systems",
        "content": "<p><strong><img src=\"/media/uploads/bandeaux_articles.png\" width=\"990\" height=\"330\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></strong>Pr&eacute;sent&eacute; par <a href=\"https://forum.ircam.fr/profile/hkzm/\">Hiromichi Kitazume</a>, <a href=\"https://forum.ircam.fr/profile/jinyoung/\">Jin-Young</a> Lee et Ken Ito</p>\r\n<p><strong>Mangrove (=KANDEL) de la m&eacute;moire</strong><br />...Expression acoustique de proximit&eacute; gr&acirc;ce &agrave; l'int&eacute;gration d'un syst&egrave;me de RA auditive portable et de haut-parleurs multicanaux</p>\r\n<p>ITOKEN + JIN-YOUNG LEE + HIROMICHI KITAZUME<br />Division de la direction d'orchestre et de la composition Universit&eacute; de Tokyo</p>\r\n<p>-</p>\r\n<p style=\"text-align: left;\">-</p>\r\n<p style=\"text-align: center;\"><strong>&alpha;</strong></p>\r\n<p>En 2017, sous la supervision du Dr Hideki Shirakawa, nous avons d&eacute;velopp&eacute; l'actionneur AR auditif ci-dessus en laminant un polym&egrave;re conducteur sur un film pi&eacute;zo&eacute;lectrique organique.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0f6da9e91087d6b1b0aaa14898afd91e.png\" /><span>&nbsp; &nbsp;<span>&nbsp;</span></span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/535d6e64e0dd5464bac01239ccc82613.png\" /></p>\r\n<p>Ensuite, en 2023, en utilisant la technologie de l'IRCAM, nous avons cr&eacute;&eacute; le syst&egrave;me PAAR &lt;Proximal Auditory AR&gt;, qui envoie le son des haut-parleurs externes au casque AR portable.</p>\r\n<p>En 2017, sous la supervision du Dr Hideki Shirakawa, nous avons d&eacute;velopp&eacute; l'actionneur de RA auditive ci-dessus en laminant un polym&egrave;re conducteur sur un film pi&eacute;zo&eacute;lectrique organique.</p>\r\n<p>Puis, en 2023, en utilisant la technologie de l'IRCAM, nous avons cr&eacute;&eacute; le syst&egrave;me PAAR &lt;Proximal Auditory AR&gt;, qui envoie le son des haut-parleurs externes au 
casque AR portable.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5592406ea30e3702b629671b8acbca34.png\" /><span>.&nbsp; &nbsp;<span>&nbsp;</span></span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/10e21b7796a636be6e5a744e3b813c29.png\" /></p>\r\n<p>En 2007, lorsque \"Gruppen (1957)\" a &eacute;t&eacute; rejou&eacute; au festival de Lucerne 50 ans apr&egrave;s sa cr&eacute;ation, nous avons eu l'occasion de soutenir Stockhausen de K&uuml;rten, Pierre Boulez et Peter E&ouml;tv&ouml;s en tant qu'agent de liaison.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1d52fd4322c0c6f800b03de52b15dd6e.png\" /></p>\r\n<p>En 1958, Boulez a compos&eacute; Po&eacute;sie pour pouvoir en r&eacute;ponse &agrave; \"Gruppen\", qui &eacute;tait novateur en termes de spatialit&eacute; dans la musique, mais Boulez n'&eacute;tait pas satisfait du r&eacute;sultat. Finalement, Boulez a r&eacute;solu ce probl&egrave;me en fondant l'IRCAM, en d&eacute;veloppant la technologie de base et en achevant ses R&eacute;pons (1981), mais CETTE SPATIALIT&Eacute; du son est toujours rest&eacute;e ENTRE LES PARLEURS : Loin des oreilles des auditeurs. NOTRE &OElig;UVRE est la premi&egrave;re invention musicale dans laquelle le son est DONN&Eacute; aux OREILLES de l'auditeur &agrave; partir de haut-parleurs distants, DIRECTEMENT.</p>\r\n<p>-</p>\r\n<p style=\"text-align: center;\"><strong>&beta;</strong></p>\r\n<p>Outre la question de la spatialit&eacute;, l'un des sujets que Boulez a cherch&eacute; &agrave; approfondir dans Po&eacute;sie pour pouvoir &eacute;tait l'expansion du probl&egrave;me du \"Sprechgesang\" de Schoenberg, qui pose la question de la diff&eacute;rence entre le chant et la parole. 
Boulez attempted to extend his efforts in Le marteau sans ma&icirc;tre (1953-55) to electroacoustic music, with the idea of treating the spectrum of instrumental timbre and the human vocal formant as a continuum/discretum. However, neither the analog technology of the 1950s nor the early digital technologies of 1981 could bridge these deep gaps between timbre, voice and speech.</p>\r\n<p>In 1995, we began collaborating with Pierre Boulez in Tokyo; earlier, in 1987, Luigi Nono (1924-90), who visited Tokyo only once in his life, had questioned us about the problem of \"Sprechgesang\" and \"Klangfarbenmelodie\". Between 1997 and 1998, we partially solved this problem by applying sinusoidal decomposition, which splits spoken language into tiny sinusoids, following the mechanism of the human auditory system, in particular the hair cells of the cochlea. 
We had the opportunity to discuss this subject briefly with G&eacute;rard Grisey at the Th&eacute;&acirc;tre du Nord, but unfortunately he died suddenly shortly afterwards, and 25 years have quickly passed.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/eefc6040968ba286caed6a08b5cca333.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2acdea5bbafa04aeeea75aead858a9b0.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c7b619e6d58e349e13f8ff87943faeb0.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/65a80c3853dd9a74a789617045b9b021.png\" /></p>\r\n<p>In our work, the vocal document of the poet Frank Diamand (1939-), a survivor of the Bergen-Belsen concentration camp, reciting his own poetry, is torn apart like threads of sinusoidal fragments.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/fdae559d3fd46a91579c199d57ce9e27.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3bef24b5a2dce8390650629e5c3f4f4e.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/66cfef89c32f8278da19fbefac291d30.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3155a45689607b7f76a6568164280e18.png\" /></p>\r\n<p>By following this process in reverse, as a hysteresis of brain cognition, it is possible to reconstruct a linguistic sound image from sinusoids, and even from an instrument-like timbre.</p>\r\n<p>-</p>\r\n<p style=\"text-align: center;\"><strong>&gamma;</strong></p>\r\n<p>In 1987, during his only visit to Japan, Luigi Nono held a seminar at the University of Tokyo on his \"Prometeo - tragedia dell'ascolto\" (1981-85). He mentioned \"spatiality\" as an attribute of music that is \"neither pitch nor rhythm\", suggesting the idea of positioning \"spatiality\" as an extension of \"timbre\". Schoenberg's original definition of \"Klangfarbenmelodie\" is that \"changes of timbre\" themselves could be conceived as \"melodies\". In this view, spoken language could also be a kind of \"changes of timbre\", and the problem of \"Sprechgesang\" can likewise be treated as part of \"Klangfarbenmelodie\", just as \"spatiality\" is a special extension of the concept of timbre.<br />When we add up the \"sinusoidal parts\" of a linguistic sound, simple fragments compose complex timbral changes, and finally the meaning of the spoken language becomes audible.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b3d97f46c70a123278bb561f44c4cdd9.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8c904d460390cf6b0db28fc44f37b40f.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/032fc262968bb18159211b52dddd7e69.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/67fbd3a224db3e0e768b1cde57f0beba.png\" /></p>\r\n<p><strong>1 sinusoid piece&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2 sinusoid pieces&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3 sinusoid pieces&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Audible verbal meaning</strong></p>\r\n<p>If we apply a similar operation to instrumental timbres, adding or removing specific partial spectra, the meaning of language becomes audible even when using only instruments, without any electronic system.<br />The creation of new instrumental fingerings would completely change the musical experience.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9e16033c0d1e074b458ad47949a1aa2d.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/68baf543506a61ce594292fb0f18d938.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0a3539ebeddd2f35937ea666afd5d493.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f4ce621ee69624d900bf96c8ba322a28.png\" /></p>\r\n<p><strong>Bass clarinet long tone&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Pulses&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Double color trill&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Triple color trill</strong></p>\r\n<p>In this work, by adding or removing spectra either instrumentally or electronically, we prepared for the situation in which the changing and spatially floating timbre itself creates rhythms and melodic motives.</p>\r\n<p>-</p>\r\n<p style=\"text-align: 
center;\"><strong>&Omega;</strong></p>\r\n<p>Notre ami Frank Diamond (1939-), po&egrave;te et r&eacute;alisateur juif n&eacute;erlandais, est un survivant de l'Holocauste qui a &eacute;t&eacute; lib&eacute;r&eacute; du camp de concentration de Bergen-Belsen alors qu'il &eacute;tait &acirc;g&eacute; de 6 ans. Voici des extraits de son po&egrave;me.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9e16033c0d1e074b458ad47949a1aa2d.png\" />&nbsp;<span>&nbsp;</span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/68baf543506a61ce594292fb0f18d938.png\" />&nbsp;<span>&nbsp;</span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0a3539ebeddd2f35937ea666afd5d493.png\" /><span>&nbsp;</span>&nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f4ce621ee69624d900bf96c8ba322a28.png\" /></p>\r\n<p><strong>&nbsp; &nbsp;Bass clarinet longtone&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Pulses&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Double color trill&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Triple color trill</strong></p>\r\n<p>In this work, by adding or removing spectra either instrumentally or electronically, we prepared for the situation in which the changings and spatially floating timbre themselves create rhythm and melodic motives.</p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: center;\"><strong>&Omega;</strong></p>\r\n<p style=\"text-align: center;\">Our friend Frank Diamond (1939-), a Jewish Dutch poet and film director, is a Holocaust survivor who was liberated from the Bergen-Belsen concentration camp as a 6-year-old boy. 
Below are excerpts from his poem.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9295416c82765b5e7311ee13dd9b68be.png\" />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/ed91c3ff07447d7416a01d1ad5fa5e09.png\" /></p>\r\n<p><strong>\"I was looking for</strong><br /><strong>the lady of oblivion</strong><br /><strong>but she was not there.</strong><br /><strong>I needed her like death,</strong><br /><strong>my complement.</strong><br /><strong>So, we will meet again when I am old.</strong><br /><strong>You will suck my womb, and you will burn my brain.\"</strong></p>\r\n<p>\"Mangrove (=KANDEL) de la m&eacute;moire\" is composed of three sound layers. The first layer is an \"<strong>outer world</strong>\" characterized by the pulsation of a bass clarinet reverberating inside the room, in which only a few spoken languages are audible. The second layer consists of the \"<strong>voices of border crossers</strong>\", which jump into and out of the listener's ears when an auditory augmented-reality headset is worn. The third layer is that of the \"<strong>hidden voices</strong>\", read by Frank himself, which can convey verbal meaning only through the AR headset.</p>\r\n<p>Once this \"hidden voice\" has been heard, listeners realize that the victims' voices are omnipresent in the external echoes previously considered insignificant. 
Cette \"hyst&eacute;r&eacute;sis\" de la m&eacute;moire a &eacute;t&eacute; con&ccedil;ue d'apr&egrave;s les pens&eacute;es d'un historien et psychanalyste d'origine, qui a quitt&eacute; Vienne &agrave; l'&acirc;ge de 9 ans lors de la \"Nuit de Cristal\" (1938). Il a &eacute;t&eacute; &eacute;vacu&eacute; aux &Eacute;tats-Unis et plus tard connu comme neurophysiologiste Nobel de la m&eacute;moire, Eric Kandel (1929-), qui a r&eacute;ussi &agrave; retracer la \"suppression de la m&eacute;moire\" par le biais du processus de la science mol&eacute;culaire.</p>\r\n<p>Ce travail est d&eacute;di&eacute; &agrave; Frank, &agrave; Mme et au Dr Kandel.</p>\r\n<p>-</p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></p>",
        "topics": [
            {
                "id": 623,
                "name": "Ambisonics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1910,
                "name": "HOA",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1911,
                "name": "Wearable Auditory AR",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4222,
            "forum_user": {
                "id": 4220,
                "user": 4222,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Photo_50per_trim2.jpg",
                "avatar_url": "/media/cache/0f/1e/0f1e74a7d36f85ffc3b3310f1fa28537.jpg",
                "biography": "Hiromichi Kitazume (b.1987 in Tokyo, Japan) studied music composition with Stefano Gervasoni and electroacoustic and mixed music with Luis Naón, Yan Maresz, Tom Mays and Oriol Saladrigues at the Conservatoire de Paris, and follows the Cursus of IRCAM. Previously, he received his Master’s Degree in music composition from Tokyo National University of Fine Arts and Music (Geidai), where his principal teacher was Ichiro Nodaira. In addition, he studied conducting at Tôhô Gakuen School of Music with Ken Takaseki.\n\nAs a composer, Kitazume has worked on commissions that have come from many performing organizations and musicians. His works were also heard in such festivals as National Arts Festival (Japan), Akiyoshidai Summer Music Festival (Japan), Chigiana International Festival (Italy), Journées Nationales de la Musique Electroacoustique (France), Setouchi Triennale (Japan), Manifeste (France) and in numerous concerts in Japan and in Europe.\n\nAs a conductor Kitazume is in great demand especially for the contemporary repertoire. He has conducted numerous pieces for orchestra, brass band, ensemble and choir, more than 30 (until 2013) of which are World Premieres or Japan Premieres.",
                "date_modified": "2026-01-01T12:00:53.509263+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 756,
                        "forum_user": 4220,
                        "date_start": "2025-12-25",
                        "date_end": "2026-12-25",
                        "type": 0,
                        "keys": [
                            {
                                "id": 368,
                                "membership": 756
                            },
                            {
                                "id": 992,
                                "membership": 756
                            },
                            {
                                "id": 993,
                                "membership": 756
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "hkzm",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "hoa-proximity-acoustic-expression-through-integrated-wearable-auditory-ar-and-multichannel-loudspeaker-systems",
        "pk": 2840,
        "published": false,
        "publish_date": "2024-03-26T11:33:01+01:00"
    },
    {
        "title": "Présentation de Tone Free - Nouveau plugin ASAP",
        "description": "Tone Free vous permet de modifier la hauteur et les formants d'un son en temps réel.",
        "content": "<p>Tone Free vous permet de modifier la hauteur et les formants d'un son en temps r&eacute;el. La transposition de hauteur pr&eacute;serve les r&eacute;sonances originales du son, contrairement &agrave; la transposition de formants, qui d&eacute;place les r&eacute;sonances &agrave; travers le spectre. En pr&eacute;servant certaines des r&eacute;sonances du son (g&eacute;n&eacute;ralement &agrave; raison de 25 %), vous conservez le caract&egrave;re naturel du son original. &Agrave; l'inverse, le d&eacute;placement des r&eacute;sonances g&eacute;n&egrave;re des effets de filtrage qui peuvent s'av&eacute;rer int&eacute;ressants pour des approches cr&eacute;atives.</p>\r\n<p>👉 Ce plugin fait &eacute;galement partie de l'offre&nbsp;<a href=\"https://testing.forum.ircam.fr/shop/en/asap/full-bundle\">ASAP Full Bundle</a><span><span>&nbsp;</span></span>et de l'abonnement<span><span>&nbsp;</span></span><a href=\"https://forum.ircam.fr/shop/en/premium\">IRCAM Forum Premium</a><span><span>.</span></span></p>\r\n<p><img src=\"https://forum.ircam.fr/media/uploads/user/af82d40a3cda1022f6b8a62631b365d2.png\" /></p>\r\n<p>### System Requirements<br />- MacOS 10.15 and higher (64bit - Universal Intel/Silicon)<br />- Linux (64 bit)<br />- Windows 10 and 11 (64 bit).</p>\r\n<p>### Plugins Format<br />The plugin is available in the VST3, Audio Unit, AAX formats.</p>\r\n<p></p>",
        "topics": [
            {
                "id": 925,
                "name": "ASAP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 89,
                "name": "Pitch",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 210,
                "name": "SuperVP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 123,
                "name": "Transposition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "introducting-tone-free-new-asap-plugin",
        "pk": 4352,
        "published": false,
        "publish_date": "2026-02-13T13:28:39+01:00"
    },
    {
        "title": "Sonic Psychogeography: reimagining urban experience by Jett Ilagan (Taiwan / Philippines)",
        "description": "A research-based project using game engine to create a 3D virtual environment shaped through the intersection psychogeography, sound, and memory. Users drift through ambient textures, spatial audio, and migrant narratives, guided not by visuals, but by sound.",
        "content": "<p></p>\r\n<p style=\"text-align: left;\"><strong>Memory plays a crucial role in the context of migration as it provides continuity to the dislocations of individuals. Among the array of stimuli influencing the process of memory formation, <span style=\"text-decoration: underline;\">sound emerges as an equally significant factor.</span> For migrants, particularly those who have relocated to a new country or environment, sound triggers memories associated with their homeland, culture, or past experiences.</strong><br /><br />This presentation introduces a research-based project that explores how sonic psychogeography and memory can be translated into a virtual environment. Currently in development, <strong>sonic_{imprints}&deg; </strong><em>(working title)</em>&nbsp;constructs a speculative 3D space guided not by visual objectives or gameplay mechanics, but by sound &mdash; offering users an immersive drift through ambient textures, spatial audio cues, and migrant narratives embedded in the city.</p>\r\n<p style=\"padding-left: 40px;\">&nbsp;</p>\r\n<p style=\"padding-left: 120px;\">sonic_{imprints}<strong>&deg;</strong> engages with the lived sonic realities of Filipino migrant workers in Taipei, examining how sound operates as a carrier of memory, cultural presence, and spatial belonging. Drawing on field recordings and psychogeographic methods such as d&eacute;rive, the project constructs an audio-centered world in which users navigate through zones of tension, nostalgia, and resonance. 
Unlike conventional game design that prioritizes visual feedback, this space invites a different kind of interaction: one where listening becomes the primary mode of engagement.<br /><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/bc31caf3e929d79edf7232b44ed08173.png\" /></p>\r\n<p style=\"padding-left: 120px;\"><em><sub>The work investigates how urban soundscapes can be recontextualized to foreground embodied and affective modes of exploration.</sub></em></p>\r\n<p style=\"padding-left: 280px;\">&nbsp;</p>\r\n<p style=\"padding-left: 240px;\"><em><sub><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c4c1f14b56d10caa03f736cbcb19a0e7.jpg\" style=\"float: right; padding-left: 20px;\" /></sub></em></p>\r\n<p style=\"padding-left: 80px; text-align: left;\"><em><sub>The project positions virtual space not as a neutral digital canvas, but as an emotionally charged environment shaped by memory and movement.</sub></em></p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<p style=\"text-align: left;\">Conceptually, sonic_{imprints}<strong>&deg;</strong> draws from theories of acoustic ecology, sonic agency, and migratory listening. The research also reflects on psychogeography as a method of mapping urban experience through affect and sound, rather than through visual or cartographic logic. Listening, here, becomes a way of sensing the city &mdash; one shaped by labor, distance, and cultural displacement.<br /><br /></p>\r\n<p style=\"text-align: left;\">Though the project is still in its prototyping phase, the presentation will reflect on early findings, aesthetic strategies, and technical methods used in the process. It will also address how immersive audio and 3D environments can serve as tools for reimagining urban experience, particularly from marginalized or diasporic perspectives.<br /><br /><br /></p>\r\n<div>&nbsp;</div>\r\n<hr />\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: left;\"><em>Jett Ilagan (b. 1991), a.k.a. 
escuri, is an interdisciplinary artist and cultural worker based in the Philippines and Taiwan. His body of work, which includes installation, video art, and audio-visual performance, explores environmental sounds, particularly the idea of <strong>&ldquo;cultural soundscapes,&rdquo; </strong>through immersion-based methods such as psychogeography, sound walking, holding community workshops, and personal encounters with the subject environment and its locals.</em></p>\r\n<p style=\"text-align: left;\"><em>Ilagan investigates spatial soundscapes, intending to encourage people to question and reflect on their relationship with the environment in this period, during which human activities have dominated rural &amp; urban ecology and generative spaces.&nbsp;</em></p>\r\n<p style=\"text-align: left;\"><em>Over the past years, he has exhibited, conducted art projects, and participated in artist residencies in the Philippines, USA, Germany, Italy, Malaysia, Singapore, Korea, Japan, Vietnam, and Taiwan.</em></p>\r\n<p style=\"text-align: left;\"><em>Currently, he is enrolled at the Taipei National University of the Arts, where he is taking an International Master's Program in Studies of Arts and Creative Industries, focusing on Interdisciplinary Art (2024). He holds a Diploma in Digital Arts &amp; Design (2010) and a Bachelor's Degree in Communication, Major in Multimedia Arts (2014), from Mapua Malayan Colleges Laguna.&nbsp;</em></p>",
        "topics": [
            {
                "id": 194,
                "name": "3d",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 910,
                "name": "field recordings",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3593,
                "name": "game engine",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3592,
                "name": "installation art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3591,
                "name": "psychogeography",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 133534,
            "forum_user": {
                "id": 133359,
                "user": 133534,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Artist_Portrait_Jett_Ilagan.jpg",
                "avatar_url": "/media/cache/0e/1a/0e1ab4d0fbbe7e209330b4beb8573c7b.jpg",
                "biography": "Jett Ilagan (b. 1991), a.k.a. escuri, is an interdisciplinary artist and cultural worker based in Laguna, Philippines.\n\nHis body of work, which includes installation, video art, and audio-visual performance, explores environmental sounds, particularly the idea of “cultural soundscapes,” through immersion-based methods such as psychogeography, sound walking, holding community workshops, and personal encounters with the subject environment and its locals.\n\nIlagan investigates spatial soundscapes, intending to encourage people to question and reflect on their relationship with the environment in this period, during which human activities have dominated rural & urban ecology and generative spaces.\n\nOver the past years, he has exhibited, conducted art projects, and participated in artist residencies in the Philippines, USA, Germany, Italy, Malaysia, Singapore, Korea, Japan, Vietnam, and Taiwan. He is part of BuwanBuwan Collective, a Manila-based record label dedicated to unearthing substantial electronic art forms. He has also co-founded PsWs, an independent artist-run initiative based in the province of Laguna.",
                "date_modified": "2025-11-06T01:16:48.889268+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jettilagan",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sonic-psychogeography",
        "pk": 3900,
        "published": true,
        "publish_date": "2025-10-27T10:35:58+01:00"
    },
    {
        "title": "Tweak of the Week (W26)",
        "description": "Piano, xylophone, drums, and bass conspire to make you tap your feet!",
        "content": "<div style=\"position: relative; padding-bottom: 80%; height: 0px; border-radius: 10px; overflow: hidden;\"><iframe width=\"300\" height=\"150\" style=\"border: none; position: absolute; top: 0px; left: 0px; width: 100%; height: 100%;\" src=\"https://tweakable.org/embed/examples/seqmult_v67\" frameborder=\"0\"></iframe></div>\r\n<h4 id=\"create-your-own-tweakables-at-tweakable-org\" style=\"position: relative; padding-bottom: 65%; height: 0px; border-radius: 10px; overflow: hidden;\">Create your own Tweak at&nbsp;<a href=\"http://tweakable.org/\">tweakable.org</a>.</h4>",
        "topics": [
            {
                "id": 428,
                "name": "Algorithmic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 206,
                "name": "Interactive real-time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 426,
                "name": "Tweakable",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 427,
                "name": "Tweakoftheweek",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18424,
            "forum_user": {
                "id": 18417,
                "user": 18424,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d36f7c122c36bf714b376ed2c132c929?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jwvsys",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "tweak-of-the-week-w26",
        "pk": 710,
        "published": true,
        "publish_date": "2020-06-22T13:40:21+02:00"
    },
    {
        "title": "Hearing from Within a Crossfade by Lewis Wolstanholme",
        "description": "This installation explores spatialised deconstructions of timbre using the Joint Time-Frequency Scattering transform for neural audio synthesis. This spatialisation process creates an atmospheric sonic landscape which highlights the modulatory and transitory characteristics of a sound at distinct locations within a space.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<p><img src=\"/media/uploads/installation.png\" alt=\"\" width=\"1000\" height=\"789\" /></p>\r\n<p>Presented by: Lewis Wolstanholme</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/lwolstanholme/\" target=\"_blank\">Biography</a></p>\r\n<div></div>\r\n<div>This installation explores spatialised deconstructions of timbre, and creates immersive transitions between different sonic materials using the Joint Time-Frequency Scattering (JTFS) transform. The JTFS transform produces a multi-dimensional representation of audio by analysing and disentangling the spectrotemporal modulations present within sound, such as frequency modulations, amplitude modulations, and pitch. The JTFS transform has been shown to closely model how our brain interprets modulatory changes in sonic materials, owing to the relationship between the wavelets employed by the JTFS and the neurophysiology of the auditory cortex. Using this technique, it is possible to design an iterative resynthesis algorithm, utilising machine learning techniques and gradient descent, which can be used to distort, crossfade, and reshape the form of musical and sonic materials during composition.</div>\r\n<div></div>\r\n<div>For this work, various recordings and musical fragments have been stitched together to create an immersive, textural and seamlessly evolving sonic palette. To achieve this, the JTFS resynthesis technique has been utilised to artificially create long-form passages of audio that demonstrate a transition from one sonic fragment to another. By rendering the audio produced at every step of the gradient descent process, it is possible to portray the inner workings of this resynthesis technique, and create passages of audio that emphasise the transitory process from one source material to another. The products of this resynthesis technique are then spatialised relative to their underlying spectrotemporal modulations and pitch. This spatialisation process creates an atmospheric sonic landscape which highlights the modulatory characteristics of a sound at distinct locations within a space.</div>\r\n<div></div>\r\n<div>This work has been produced in collaboration with the technologist Christopher Mitcheltree, who has been developing a new approach towards employing the JTFS transform during the creative process. Christopher is a PhD researcher at the Centre for Digital Music at Queen Mary University of London, and is also a founding engineer of Neutone: a neural audio plugin, open-source SDK, and community that helps bridge the gap between audio researchers and artists. This work also builds upon many of the techniques originally presented in the 2023 AES paper &lsquo;Hearing from Within a Sound&rsquo;.</div>\r\n<p></p>",
        "topics": [
            {
                "id": 2341,
                "name": "immersive audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1707,
                "name": "installation sonore",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1774,
                "name": "neural synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 276,
                "name": "Spat 5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 63372,
            "forum_user": {
                "id": 63305,
                "user": 63372,
                "first_name": "Lewis",
                "last_name": "Wolstanholme",
                "avatar": "https://forum.ircam.fr/media/avatars/lew_copy.png",
                "avatar_url": "/media/cache/ea/41/ea4188a68ef3976ec33131bb98af5bd8.jpg",
                "biography": "Lewis is a multidisciplinary artist and musician based in London. His practice centres around contemporary approaches to composition, installation and performance, utilising his technological capabilities to develop new techniques towards exploratory sound design and experimental performativity. Lewis is currently a part-time PhD researcher in Electronic Engineering & Computer Science at the Centre for Digital Music, Queen Mary University of London. Prior to this, he received his BMus and MMus in composition from Goldsmiths, University of London. Now working as part of the Augmented Instruments Lab alongside Andrew McPherson, Lewis' research is centred upon material fictions - the process of developing a compositional and computational narrative using an interplay of material perspectives.\n\nLewis works as a freelancer across multiple artistic and specialist disciplines. He has assisted artists in a variety of settings, most prominently providing musical, technical and audio engineering support to artists during performance and production.",
                "date_modified": "2026-02-12T11:20:11.743357+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lwolstanholme",
            "first_name": "Lewis",
            "last_name": "Wolstanholme",
            "bookmarks": []
        },
        "slug": "hearing-from-within-a-crossfade",
        "pk": 3303,
        "published": true,
        "publish_date": "2025-02-21T14:03:06+01:00"
    },
    {
        "title": "Dicy2: Composing Musical Interactions with Generative Agents - Jerome Nika",
        "description": "Dicy2 is both a package for Max and a plugin for Ableton Live implementing interactive agents that use machine learning to generate musical sequences, which can be integrated into musical situations ranging from the production of structured material within a composition process to the design of autonomous agents for improvised interaction.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p>Presented by: Jerome Nika<br /><a href=\"https://forum.ircam.fr/profile/jnika/\">Biography</a></p>\r\n<p><br />Dicy2 is both a package for Max and a plugin for Ableton Live implementing interactive agents that use machine learning to generate musical sequences, which can be integrated into musical situations ranging from the production of structured material within a composition process to the design of autonomous agents for improvised interaction.</p>\r\n<p><a href=\"https://www.youtube.com/watch?v=xt8-rlqMIQM&amp;list=PL-C_JLZNFAGco5OO3loQkBRIiNrs0tCkt\">IRCAM Videos tutorial playlist: Dicy2 generative musical agents</a></p>\r\n<p>Dicy2 integrates the results of scientific and musical research accumulated over the course of productions and experiments with R&eacute;mi Fox, Steve Lehman, the Orchestre National de Jazz, Alexandros Markeas, Pascal Dusapin, Le Fresnoy - Studio National des Arts Contemporains, Vir Andres Hera, Ga&euml;tan Robillard, Beno&icirc;t Delbecq, Jozef Dumoulin, Ashley Slater, Herv&eacute; Sellin, Rodolphe Burger, Marta Gentilucci... After several years of evolving research prototypes that crystallized the contributions of these various projects, collaborative work carried out over the course of 2022 led to the finalization of a version of Dicy2 as a plugin for Ableton Live and a library for Max.</p>\r\n<p></p>\r\n<p>Dicy2 is a library for Max and a device for Ableton Live designed and developed by J&eacute;r&ocirc;me Nika, Augustin Muller, Joakim Borg, and Matthew Ostrowski for the Musical Representations team at Ircam, within the ANR-DYCI2, ANR-MERCI, and ERC-REACH projects led by G&eacute;rard Assayag, and the UPI-CompAI Ircam project. The audio use cases were designed and developed with Diemo Schwarz and Riccardo Borghesi, and use the MuBu and CataRT environments from Ircam's ISMM team. Max4Live plugin by Manuel Poletti. Contributions / acknowledgements: Serge Lemouton, Jean Bresson, Thibaut Carpentier, Georges Bloch, Mikha&iuml;l Malt, Axel Chemla--Romeu-Santos, Tristan Carsault, Vincent Cusson, Tommy Davis, Dionysios Papanicolaou, Greg Beller, Markus Noisternig.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 175,
                "name": "CAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 297,
                "name": "Electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 636,
                "name": "Generative music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1651,
                "name": "Improvisation, générativité et interactions co-créatives",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 212,
                "name": "Real-Time Audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1647,
                "name": "Technologies Ircam Free",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1646,
                "name": "Technologies Ircam Premium",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18367,
            "forum_user": {
                "id": 18360,
                "user": 18367,
                "first_name": "Jerome",
                "last_name": "Nika",
                "avatar": "https://forum.ircam.fr/media/avatars/jerome_nika-466x233.jpg",
                "avatar_url": "/media/cache/f2/20/f220de2bc73567220b06bd17faf4baa1.jpg",
                "biography": "As a researcher at Ircam, Jérôme Nika’s work focuses on how to model, learn, and navigate an “artificial musical memory” in creative contexts. In opposition to a “replacement approach” where AI would substitute for humans, this research aims at designing novel creative practices involving a certain level of symbolic abstraction, such as “interpreting / improvising the intentions” and “composing the narration”. \nNumerous productions have used the resulting technologies: Roulette, NYC; Onassis Center, Athens; Ars Electronica Festival, Linz; Frankfurter Positionen festival; Annenberg Center, Philadelphia; Bimhuis, Amsterdam; French embassy Washington DC; Maison de la Radio, Centre Pompidou, Collège de France, LeCentquatre, Paris; Montreux Jazz Festival; Montreal Jazz Festival, etc.\nAs a musician, computer music designer, or scientific advisor, he is involved in numerous musical productions and artistic collaborations, particularly in improvised music (Steve Lehman, Orchestre National de Jazz, Bernard Lubat, Benoît Delbecq, Rémi Fox), contemporary music (Pascal Dusapin, Alexandros Markeas, Ensemble Modern, Marta Gentilucci), and contemporary art (Le Fresnoy).",
                "date_modified": "2026-02-23T11:56:29.425335+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 644,
                        "forum_user": 18360,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-25",
                        "type": 0,
                        "keys": [
                            {
                                "id": 448,
                                "membership": 644
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "jnika",
            "first_name": "Jerome",
            "last_name": "Nika",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2757,
                    "user": 18367,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "dicy2-composing-music-interaction-with-generative-agents",
        "pk": 2757,
        "published": true,
        "publish_date": "2024-02-20T10:04:02+01:00"
    },
    {
        "title": "Exploring Creative Processes with the Wave Field Synthesis Collider by Yuko Ohara (Japan/UK)",
        "description": "This lecture explores the creative process behind spatial sound composition using the Wave Field Synthesis (WFS) Collider.\r\nThe piece \"broken garage\" was created from inside-piano sounds performed with mallets and beaters. More than thirty sounds were recorded using extended techniques, of which twenty-four were selected. The resulting sonic material evokes the broken garage door beneath a former residence in The Hague, and the lecture highlights compositional strategies developed in \"broken garage.\"",
        "content": "<p><span>This lecture presents the creative process and compositional strategies developed in the piece \"broken garage,\" realised with the Wave Field Synthesis (WFS) Collider. The piece is based on inside-piano sounds performed with a variety of mallets and beaters, resulting in a wide spectrum of resonances, percussive textures, and timbral nuances. More than thirty distinct sounds were recorded using extended techniques, from which twenty-four were carefully selected as the sonic material of the work.</span><br /><span>The title \"broken garage\" refers to the everyday soundscape of a malfunctioning garage door situated just beneath my former residence in The Hague. This acoustic reference provided both a conceptual and aural framework for the composition, connecting environmental sound memory with instrumental exploration.</span><br /><span>In the lecture, I will discuss the creative process of transforming these recorded materials into spatialised structures within the WFS environment. Emphasis will be placed on the interaction between compositional intention and spatial perception, including approaches to distributing sounds across the loudspeaker array, creating immersive spatial textures, and exploring sound trajectories.</span><br /><span>This presentation shares practical examples and reflections from the development of \"broken garage,\" touching on some of the challenges encountered during the process. It also discusses how Wave Field Synthesis (WFS) can expand compositional practice and offer new perspectives for artistic exploration in spatial sound.</span></p>",
        "topics": [
            {
                "id": 3469,
                "name": "extended techniques",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3468,
                "name": "inside-piano",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3138,
                "name": "spatial sound",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3467,
                "name": "Wave Field Synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3466,
                "name": "Wave Field Synthesis Collider",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 10787,
            "forum_user": {
                "id": 10784,
                "user": 10787,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Ohara_Yuko_Workspace_Photo_1.jpg",
                "avatar_url": "/media/cache/90/28/90286f286e4669b9fa731e6da7ba86d8.jpg",
                "biography": null,
                "date_modified": "2025-11-20T11:39:54.077436+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "composeryuko",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "exploring-creative-processes-with-the-wave-field-synthesis-collider-by-yuko-ohara-japanuk",
        "pk": 3751,
        "published": true,
        "publish_date": "2025-10-03T10:42:32+02:00"
    },
    {
        "title": "3D Sound for XR Experiences - Charles Verron, Noise Makers",
        "description": "Lecture on Friday, March 22, from 10:30 to 11:00, in the Stravinsky room. \r\nWorkshops on Friday, March 22: two 45-minute sessions from 11:15 to 12:45, in the Nono room.\r\n\r\nThis lecture will present the 3D audio technologies developed at Noise Makers for post-production and immersive experiences.\r\n\r\nTwo workshop sessions will follow the presentation, with demonstrations on a VR headset.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"330\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Presented by: Charles Verron - Noise Makers&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/cverron/\">Biography</a></p>\r\n<p>This lecture will present the 3D audio technologies developed at Noise Makers for post-production and immersive experiences.</p>\r\n<p>After a brief introduction to binaural audio, ambisonics, and auralization, practical use cases will be demonstrated through three XR projects: L'Op&eacute;ra Immersif, an immersive concert in a game engine (2019); the audio guide of the H&ocirc;tel de la Marine, involving head-tracked in-situ binaural rendering (2021); and the Saint-Gobain Building Acoustics VR experience (2023).</p>\r\n<p>Two workshop sessions will follow the presentation, with demonstrations on a VR headset. The Noise Makers plugins will be presented and used by participants to create binaural and ambisonic soundtracks. Applications include immersive music and extended reality.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 63744,
            "forum_user": {
                "id": 63677,
                "user": 63744,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f5b38db4a3fa67118263cc6b386764d2?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-07-31T15:49:49.292587+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "cverron",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "3d-audio-for-xr-experiences",
        "pk": 2714,
        "published": true,
        "publish_date": "2024-02-07T15:21:09+01:00"
    },
    {
        "title": "Current directions in computer music software development at sonicLAB by Sinan Bokesoy",
        "description": "We are excited to present our newest release and upcoming titles, all of which focus on artificial network synthesis through input sound analysis.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1d505aa46fd61408d63d81205deb36b5.png\" width=\"826\" height=\"444\" /></p>\r\n<p>Presented by : Sinan Bokesoy&nbsp;</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/SinanBokesoy/\" target=\"_blank\">Biography</a></p>\r\n<p><strong>At sonicLAB, our recent developments focus on artificial network synthesis through input sound analysis. By transforming these networks into visible geometric structures and deploying spatial navigation experiences within this architecture, these advancements capture the generative essence of sonic creation while pushing the boundaries of audiovisual programming techniques.</strong></p>\r\n<p><strong>We are excited to present PolyNodes&mdash;a collaboration between Sinan Bokesoy&mdash;and two new software projects currently in development to the Forum community.</strong></p>\r\n<p><strong></strong></p>\r\n<p>&nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/16c07f75d6d19620a93ea5fb6004b5e5.png\" width=\"752\" height=\"737\" /></p>\r\n<p>March, 26th</p>",
        "topics": [
            {
                "id": 2649,
                "name": "audiovisual design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2648,
                "name": "generative audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2647,
                "name": "network synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2652,
                "name": "polynodes",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2654,
                "name": "protean",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2650,
                "name": "software development",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2651,
                "name": "sonic art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2653,
                "name": "SSNN",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 15446,
            "forum_user": {
                "id": 15443,
                "user": 15446,
                "first_name": "Sinan",
                "last_name": "Bokesoy",
                "avatar": "https://forum.ircam.fr/media/avatars/sinanportre_png.png",
                "avatar_url": "/media/cache/91/1d/911d705a8e8a4fc32df04be63c997ed8.jpg",
                "biography": "Sinan Bokesoy is an engineer, developer, and sound artist with a PhD in computer music. As the founder of sonicLAB/sonicPlanet, he has transformed his academic expertise into practical tools for composers and producers, designing software instruments that integrate algorithmic approaches with mathematical models and physical processes to create self-evolving sonic structures. Bokesoy’s work has been published and presented at numerous academic institutions and artistic events. Recognized with awards for his innovative developments, he bridges artistic creativity, scientific exploration, and technological innovation—carving out a niche in the audio tech industry.",
                "date_modified": "2026-03-02T17:03:48.699325+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "SinanBokesoy",
            "first_name": "Sinan",
            "last_name": "Bokesoy",
            "bookmarks": []
        },
        "slug": "current-directions-in-computer-music-software-development-at-soniclab",
        "pk": 3291,
        "published": true,
        "publish_date": "2025-02-16T20:41:47+01:00"
    },
    {
        "title": "Dromos/Autos : L'ontologie autistique en tant que performance - Matt Rogerson",
        "description": "Art viscéral audio-visuel en direct, tirant parti de la convergence édifiante de la musique bruyante avec les processus électroacoustiques, le neurofeedback et les TSA (troubles du spectre autistique).",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par : Matt Rogerson&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/mattrogerson/\">Biographie</a></p>\r\n<p>L'objectif de ce projet de recherche et de performance est d'explorer comment la surcharge sensorielle facilit&eacute;e par l'EEG (&eacute;lectroenc&eacute;phalographie) et le neurofeedback peut conduire &agrave; de nouveaux paradigmes de performance, par le biais d'une interface particuli&egrave;rement idiosyncrasique mais r&eacute;v&eacute;latrice de l'expression musicale : la cognition autistique. La recherche adopte une m&eacute;thodologie interdisciplinaire, bas&eacute;e sur la pratique, incorporant des facettes de la musique &eacute;lectronique g&eacute;n&eacute;rative, de la psychoacoustique, des visuels audio-r&eacute;actifs et de l'art de la performance/endurance. Le travail principal de la recherche comprend la conception de sons \"provocateurs\", c'est-&agrave;-dire des sons con&ccedil;us pour induire une surcharge sensorielle sp&eacute;cifique &agrave; l'ontologie autistique de l'interpr&egrave;te/chercheur. Ces sons sont con&ccedil;us en fonction de param&egrave;tres &eacute;tablis par le biais d'une m&eacute;thodologie de recherche auto-ethnographique r&eacute;flexive. 
L'artiste a ensuite d&eacute;lib&eacute;r&eacute; sur les arrangements potentiels des sons, ce qui a influenc&eacute; la nature de leur programmation g&eacute;n&eacute;rative en tandem avec les donn&eacute;es EEG recueillies pour produire une boucle ouverte de \"neurofeedback\" ; une &eacute;cologie dynamique sonore et de performance &agrave; laquelle l'artiste est soumis.</p>\r\n<p>Le projet de performance utilise les technologies EEG et BCI (interface cerveau-ordinateur) disponibles dans le commerce pour cr&eacute;er une &eacute;cologie de la performance &eacute;lectroacoustique, qui se manifeste sous la forme d'une performance artistique d'endurance, dans laquelle les param&egrave;tres de \"syst&egrave;mes de provocation\" virtuels sur mesure produisent des &eacute;v&eacute;nements sonores qui sont d&eacute;termin&eacute;s et modul&eacute;s par les stimuli audio pr&eacute;c&eacute;dents via le neurofeedback. Les &eacute;v&eacute;nements sont con&ccedil;us pour stimuler la cognition autistique de l'interpr&egrave;te, dans la mesure o&ugrave; ils la \"provoquent\" dans un arr&ecirc;t sensoriel cognitif incarn&eacute;. Cette m&eacute;thode de performance transgressive vise &agrave; &eacute;lucider une repr&eacute;sentation auto-ethnographique des capacit&eacute;s d'augmentation des TSA, ainsi qu'&agrave; reconfigurer les m&eacute;tar&eacute;cits sociaux qui mettent l'ontologie autistique entre parenth&egrave;ses, la pathologisent &agrave; l'exc&egrave;s et lui &ocirc;tent tout pouvoir.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a></strong></p>\r\n<p></p>",
        "topics": [
            {
                "id": 1790,
                "name": "ASD",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 128,
                "name": "Audiovisual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1791,
                "name": "Autism",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 565,
                "name": "Biofeedback",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1786,
                "name": "EEG",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1794,
                "name": "Embodied Music Cognition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1787,
                "name": "Endurance Art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 272,
                "name": "Generative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1792,
                "name": "Interdisciplinary",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1788,
                "name": "Live Art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1785,
                "name": " Neurofeedback",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1789,
                "name": "Provocative Art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1793,
                "name": "Viscerality",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 55131,
            "forum_user": {
                "id": 55068,
                "user": 55131,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Matt_Rogerson_Promo_Photo_1.png",
                "avatar_url": "/media/cache/da/e4/dae4b1593a0e868bc01c5dee68af063b.jpg",
                "biography": "Matt Rogerson is a neurodivergent sound artist and performer based in Leicester, UK. His research-practice investigates the interdisciplinary convergence of electroacoustic music, live art, sonic-autoethnography, biofeedback, audio-visual viscerality and disability studies; mediated via the practice of EEG/neurofeedback performance and informed by his own lived experience as a person diagnosed with an Autism Spectrum Disorder (ASD) to auto-ethnographically supplement his research. He holds both an undergraduate (BA) and postgraduate (MRes) degree from the Institute for Sonic Creativity, De Montfort University. Furthermore, he accommodates for his musical practice as a guitarist and improvisor by engaging in contemporary and experimental solo and ensemble projects.",
                "date_modified": "2025-09-22T21:37:31.520249+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "mattrogerson",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "dromosautos-the-autistic-ontology-as-performance",
        "pk": 2740,
        "published": true,
        "publish_date": "2024-02-15T21:03:39+01:00"
    },
    {
        "title": "Happy Accidents : Invite the Unexpected. A workshop by Sinan Bokesoy and Laurent Mialon",
        "description": "Happy Accidents: Invite the Unexpected is a workshop by Sinan Bokesoy and Laurent Mialon that delves into the creative power of unplanned moments in sound design. Participants will explore how unexpected gestures and chance occurrences can spark innovative sonic ideas using contemporary tools.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p><img src=\"/media/uploads/forumircam2025_soniclabimg.006.png\" alt=\"\" width=\"754\" height=\"754\" /></p>\r\n<p></p>\r\n<p>Presented by : Sinan B&ouml;kesoy and Laurent Mialon</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/SinanBokesoy/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p>In the realm of sound design and composition, the phrase &ldquo;happy accident&rdquo; refers to those unexpected moments when unplanned or seemingly incorrect gestures yield unanticipated yet compelling results. From beginners experimenting in digital audio workstations to the pioneering works of avant-garde composers, these surprising instances can spark new directions and inspire entire pieces. Whether they arise from a slip in parameter settings, unfamiliarity with software tools, or a deliberate injection of randomness, &ldquo;happy accidents&rdquo; have proven to be powerful creative catalysts that bridge intuition, discovery, and formal exploration.</p>\r\n<p>Practically speaking, these serendipitous occurrences often happen when a composer or sound designer is working with software or hardware whose complexity is not yet fully understood. Accidentally routing signals in unintended ways or tweaking random parameters &nbsp;can produce vibrant textures and rhythms that might never emerge through a strictly methodical process. 
These moments highlight the expressive potential of &ldquo;not-knowing,&rdquo; where the journey of learning and experimentation fuels fresh creative ideas.</p>\r\n<p>Of course, harnessing chance is not merely about letting chaos reign. The most compelling results typically arise when the artist balances spontaneity with a sense of direction&mdash;defining a parameter space for randomness, isolating the most intriguing outcomes, and sculpting them into a cohesive whole. In this way, &ldquo;happy accidents&rdquo; become more than fleeting curiosities; they become the fertile ground for electronic music.</p>\r\n<p>In our workshop, participants will investigate recent sonicLAB software tools and experiment with their capabilities to yield surprising sonic results&mdash;learning how to refine and integrate these &ldquo;accidents&rdquo; into a personal artistic practice. By sharing our own experimentation experiences, we will foster an environment where the unexpected is welcomed, celebrated, and transformed into compelling formal expressions and compositional strategies.</p>\r\n<p>March, 27th</p>",
        "topics": [
            {
                "id": 2662,
                "name": "accidents",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2660,
                "name": "aleatoric design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2661,
                "name": "generative sound",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2659,
                "name": "randomness",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 15446,
            "forum_user": {
                "id": 15443,
                "user": 15446,
                "first_name": "Sinan",
                "last_name": "Bokesoy",
                "avatar": "https://forum.ircam.fr/media/avatars/sinanportre_png.png",
                "avatar_url": "/media/cache/91/1d/911d705a8e8a4fc32df04be63c997ed8.jpg",
                "biography": "Sinan Bokesoy is an engineer, developer, and sound artist with a PhD in computer music. As the founder of sonicLAB/sonicPlanet, he has transformed his academic expertise into practical tools for composers and producers, designing software instruments that integrate algorithmic approaches with mathematical models and physical processes to create self-evolving sonic structures. Bokesoy’s work has been published and presented at numerous academic institutions and artistic events. Recognized with awards for his innovative developments, he bridges artistic creativity, scientific exploration, and technological innovation—carving out a niche in the audio tech industry.",
                "date_modified": "2026-03-02T17:03:48.699325+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "SinanBokesoy",
            "first_name": "Sinan",
            "last_name": "Bokesoy",
            "bookmarks": []
        },
        "slug": "happy-accidents-invite-the-unexpected-a-workshop-by-sinan-bokesoy-and-laurent-mialon",
        "pk": 3293,
        "published": true,
        "publish_date": "2025-02-17T07:41:53+01:00"
    },
    {
        "title": "Transformer l'intensité lumineuse en sons - light.void~ dans des particules hypothétiques - Anne Liao Zouning",
        "description": "Et si les dimensions de la musique pouvaient être contrôlées par des changements d'intensité lumineuse ? Et si des lampes de poche pouvaient transformer le paysage sonore naturel d'un coup de tonnerre en un synthétiseur chaotique, et transformer des gouttes de pluie pointillistes en accords harmonieux ? Cette présentation comprendra une performance de six minutes de la pièce Hypothetical particles, une brève discussion sur l'instrument musical numérique dépendant de la lumière light.void~, la réactivité action-son, la cartographie données-sons dans mon travail.",
        "content": "<p><em><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></em></p>\r\n<p><em><br /></em>Pr&eacute;sent&eacute; par&nbsp;: Anne Liao Zouning<br /><a href=\"https://forum.ircam.fr/profile/annel/\">Biographie</a></p>\r\n<p>En physique, les <em>particules hypoth&eacute;tiques</em> sont des particules dont l'existence n'a pas encore &eacute;t&eacute; observ&eacute;e et prouv&eacute;e. Cependant, ces particules sont n&eacute;cessaires pour maintenir la coh&eacute;rence d'une th&eacute;orie physique donn&eacute;e. Dans cette composition, j'explore ce ph&eacute;nom&egrave;ne en examinant l'interaction entre les particules de lumi&egrave;re et de son. Les amplitudes des lumi&egrave;res d&eacute;clenchent des changements dans la musique, r&eacute;v&eacute;lant des liens entre les domaines naturels et synth&eacute;tiques du son.</p>\r\n<p></p>\r\n<p>Pour faciliter cette exploration, j'ai cr&eacute;&eacute; un photo-contr&ocirc;leur num&eacute;rique inspir&eacute; du light.void~ con&ccedil;u par Felipe Tovar-Henao, qui est actuellement chercheur postdoctoral en composition musicale au College-Conservatory of Music de l'Universit&eacute; de Cincinnati aux &Eacute;tats-Unis. Son it&eacute;ration de light.void~ a &eacute;t&eacute; reconnue comme une \"r&eacute;plique inf&eacute;r&eacute;e\" de la <em>chose lumineuse</em> de Leafcutter John.</p>\r\n<p></p>\r\n<p>&nbsp;est un photocontr&ocirc;leur num&eacute;rique fait sur mesure qui utilise 16 r&eacute;sistances d&eacute;pendant de la lumi&egrave;re et fonctionne avec la carte de microcontr&ocirc;leur Arduino MEGA 2560. Chaque capteur transmet des valeurs de donn&eacute;es de 10 bits &agrave; Max/MSP, o&ugrave; elles sont converties en nombres &agrave; virgule flottante et mises &agrave; l'&eacute;chelle dans une plage de 0 &agrave; 1. 
Les donn&eacute;es transmises refl&egrave;tent l'intensit&eacute; de la lumi&egrave;re d&eacute;tect&eacute;e par les capteurs, les valeurs les plus &eacute;lev&eacute;es correspondant &agrave; une plus grande intensit&eacute; lumineuse. Ce dispositif sert &agrave; la fois de r&eacute;cepteur et d'&eacute;metteur de donn&eacute;es li&eacute;es &agrave; l'intensit&eacute; lumineuse observ&eacute;e.</p>\r\n<p></p>\r\n<p>En utilisant 16 flux de donn&eacute;es individuels, je les ai mis en correspondance avec diverses techniques de rendu sonore, notamment la granulation, le filtrage en peigne et les distorsions. J'ai assign&eacute; une r&eacute;sistance d&eacute;pendant de la lumi&egrave;re pour faire passer la musique &agrave; la section suivante, de la m&ecirc;me mani&egrave;re qu'avec une p&eacute;dale MIDI. Les diff&eacute;rents gestes refl&egrave;tent les changements d'intensit&eacute; de la lumi&egrave;re, qui correspondent &agrave; des changements dans le son. Par exemple, en rapprochant la lampe de poche de light.void~, on obtient une plus grande densit&eacute; de granulation.<br /><br /><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>\r\n<p></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1820,
                "name": "interactive live electronics",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1819,
                "name": "light instrument",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 29986,
            "forum_user": {
                "id": 29958,
                "user": 29986,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/AL_square_headshot.jpg",
                "avatar_url": "/media/cache/fc/9b/fc9bf2c83d0d941a94e9e9791b710aa4.jpg",
                "biography": "Born in Guangdong, China, Zouning’s music draws inspiration from her fascination with nature and technology, blended with a constant curiosity about the playing capacity of instruments. She endeavors to incorporate unexpected and everyday sounds into her music. \n\nHer music has been performed in the United States, France, China, and England. In 2024, her work will be featured at the IRCAM Forum Workshop 2024, SEAMUS/Sweetwater at Charlottesville 2024, Performing Media Festival as well as EMM2024. She was honored to also be featured in Musicacoustica Hangzhou Electronic Music Festival 2023, CampGround23, Turn Up 2023, SPLICE Festival V, and Everyday is Spatial 2023, New York City Electroacoustic Music Festival (2022), SEAMUS national conference in (2021, 2022), National Student Electronic Music Event (2021), and the Society of Composers Inc. (2021).  Zouning was named a finalist in the ASCAP/ SEAMUS Student Composer Commission Competition in 2021.\n\nZouning is currently pursuing a master’s degree with double majors in electronic music composition and music theory at Indiana University Jacobs School of Music. She also serves as an Associate Instructor of Music Theory and teaches writte",
                "date_modified": "2025-12-01T07:34:55.281301+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 705,
                        "forum_user": 29958,
                        "date_start": "2023-06-20",
                        "date_end": "2024-06-20",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "annel",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "transforming-light-intensity-into-sounds-lightvoid-in-hypothetical-particles",
        "pk": 2751,
        "published": true,
        "publish_date": "2024-02-16T18:28:51+01:00"
    },
    {
        "title": "The Future is Here: Listening to Space",
        "description": "This article was presented at the ISEA 2024 Forum, a prominent international event in the field of art and technology.\n\nExplores the intricate relationship between sound, technology and our environment, offering insights into the evolution of immersive sound narratives. It delves into the historical and cultural aspects of the technology, drawing parallels between ancient milestones such as the mastery of fire and contemporary immersive experiences.\nThe text explores how immersive technologies reshape our perception of space and time, analyzing the impact of next-generation audio innovations such as spatial audio and object-based sound. Emphasizes the socioeconomic factors that drive the desire for immersive experiences and addresses their implications on society.\nAdditionally, it reflects on ancient immersive practices such as the temazcal ceremony and its relevance to contemporary discussions of immersive audio. The summary addresses the evolution of immersive audio, taking into consideration not only the forms of production of these new narratives, but also the experience on the part of the user, highlighting the role of independent artists in the configuration of immersive sound narratives.\nIt emphasizes the convergence of art and technology, envisioning immersive audio as a conduit to deepen the deepest connections with our environments.",
        "content": "<blockquote>\n<div style=\"text-align: center;\">\"The future is there,\" Cayce hears herself say, \"looking back at us. Trying to make sense of the fiction we will have become. And from where they are, the past behind us will look nothing at all like the past we imagine behind us now.\"<br>William Gibson</div>\n</blockquote>\n<div>\n<div><br>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79012f\">\n<p>Have you ever closed your eyes and experienced the feeling of sound enveloping you, immersing you in a unique sensory experience? Exploring the ever-evolving relationship between sound, technology, and our environment provides precise insights into the future landscape of immersive sonic narratives.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c790474\">\n<p>When we talk about technology, we refer to a specific combination of knowledge, tools, and techniques that emerge within a particular social context. Now, as we explore the historical and cultural aspects, it's essential to recognize how these technologies have evolved over time.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c7906d6\">\n<p>Ancient practices, rooted in early human experiences, serve as an unexpected mirror to our modern immersion encounters. Around 2.0 million years ago, humans succeeded in mastering fire, marking a significant milestone in our technological evolution. This transformative milestone ushered in a new way of understanding and interacting with our environment. Fire, once a focal point for community gatherings and illumination, altered perceptions of time and space.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c790966\">\n<p>Technologies bring about significant transformations in our lives, each with varying degrees of impact. Let's consider the adoption of fire: a double-edged sword. 
While it brought light and warmth, it also led to an increase in respiratory infections due to daily exposure to smoke, an unprecedented phenomenon in human history. Research suggests that the controlled use of fire among early humans created favorable conditions for the transmission of tuberculosis.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c790c6f\">\n<p>Similarly, immersive technologies, much like fire, reshape our perception of time and space, fostering new patterns of behavior. In contemporary terms, our connection with diverse and constant sound sources has reached an unprecedented level. A multitude of auditory stimuli rhythmically punctuate our daily lives, with our minds, like a switch, toggling us in and out of the acoustic environment that simultaneously links us to physical and virtual spaces.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c791036\">\n<p>This toggling effect shapes our experience of spaces; at times, we're wholly present within the acoustic realm, feeling connected to our surroundings through sound. Other times, our thoughts take precedence, momentarily disconnecting us from the auditory world around us. This dynamic interplay between mental engagement and detachment underscores the intricate ways our minds interact with acoustic environments, influencing our perception of space and our experiences within it.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c7913ed\">\n<p>Let's consider the parallels between mastering fire, which transformed how early humans experienced their environment, and the contemporary changes in our immersion experiences. 
Whether it's fire in the past or immersive audio in the present, these technologies possess the profound ability to shape our sensory experiences and redefine our relationship with space and time.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c791782\">\n<p>Advancements in spatial audio technology redefine how we perceive sound within a three-dimensional space, altering our understanding of directionality and depth.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c791b30\">\n<p>These innovations not only transform how we relate to sound but also revolutionize our connection with the environments we inhabit, fundamentally reshaping our sensory experiences of space.</p>\n<h2><br><br>Inundare<br><br></h2>\n<blockquote>\n<p style=\"text-align: center;\">\"The echo of footsteps on a paved street carries emotional weight because the sound reverberating off the surrounding walls places us in direct relation to space; sound measures space and renders its scale understandable. With our ears, we caress the boundaries of space.\"<br>Juhani Pallasmaa<br><br></p>\n</blockquote>\n<p>&nbsp;</p>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c7922d0\">\n<p>The concept of 'immersive' shares its etymological roots with the words 'immerse' and 'inundate'. 
These three notions are used to convey the sensation of being absorbed or deeply involved in an experience.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c792677\">\n<p>The term 'inundate' has its roots in the ancient Latin word 'inundāre', vividly evoking the act of saturating a space with water or some other fluid substance.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div>\"To fill a territory with elements, beings, or people that were not there before or were not from there.&rdquo;</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c792a2a\">\n<p>\"Next-generation audio\" is a term that encapsulates the forefront of audio technology alongside innovations in audiovisual productions, enabling a deeply immersive and interactive auditory experience. This term, next-generation audio, represents a convergence of technological advancements and innovative techniques in content creation, fostering an environment where users are transported to a three-dimensional sonic landscape.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c792dcd\">\n<p>It encompasses advancements in various domains, including audio processing, spatial audio, immersive soundscapes, object-based audio, personalized audio, and interactive audio technologies.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79316c\">\n<p>Spatial audio is a central element of these new technologies, employing sophisticated techniques to create a three-dimensional audio experience, enriching how listeners perceive the direction, distance, and depth of sound. Through precise placement and movement of audio sources, it enhances immersion and spatial realism, transporting the audience to realistic sonic spaces. 
This technology faithfully replicates the behavior of sound in the real world.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79352b\">\n<p>Object-based audio has revolutionized sound design and consumption. It allows sound designers to treat individual audio components as discrete \"objects,\" each with unique metadata encompassing position, movement, and audio characteristics. This approach completely transforms production methods and offers adaptive capabilities where content dynamically adjusts to various playback systems and listener preferences. Object-based audio intensifies the sense of immersion by providing personalized auditory experiences precisely tailored to user environments.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c7938c8\">\n<p>These innovations extend across a wide range of fields, including virtual reality (VR) and augmented reality (AR), immersive installations, applications in gaming and gamification, live performances, concerts, cinema, educational simulations, healthcare applications, broadcasting, and podcasting.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c793c69\">\n<p>Most technologies associated with immersive audio are far from new, just as the fundamental human inclination to lose ourselves in absorbing narratives is far from new.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c794008\">\n<p>This inclination has been a constant throughout human history, whether within the dancing shadows of an illuminated cave or through modern devices that expand the boundaries of our sensory experiences.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c7943a0\">\n<p>Yet, in the last five years, the longing for immersion, the desire to be completely absorbed by an experience that surpasses the limits of reality and plunges us into an alternative space, has become 
insatiable.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c794757\">\n<p>What socio-economic factors contribute to this expansion and inclination to move away from reality and immerse ourselves in a parallel one? Why is immersive audio gaining importance in this era and how does it influence sound production methods? Additionally, what impact do independent and experimental artists have on shaping immersive sonic narratives?</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c794af0\">\n<p>Current immersive experiences mark the beginning of a new technological dimension that carries profound political, economic, and social implications. They represent the pinnacle of technical advancement, submerging us in a vast sea of data.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c794eca\">\n<p style=\"text-align: center;\">As content producers, we should not overlook the historical and social backdrop in which these changes unfold, as immersive productions are fundamentally geared towards meeting the need to be absorbed by potential parallel realities.</p>\n<blockquote>\n<p style=\"text-align: center;\"><br><br>\"Immersion is often just another word for enclosure. Once the claims of immediacy and presence made for virtual reality are stripped away, what remains is a device that blocks perceptual access to the immediate world. Most critical discourse and industry talk about virtual reality focus on the vivid worlds created inside the headset. But what if the key to the cultural significance of virtual reality isn't the seemingly three-dimensional spaces eventually loaded, but this initial move to voluntarily sever sensory connections with the local environment? Understanding the cultural politics of virtual reality means grasping the politics of perceptual enclosure. What draws people to stay alone in a room with a brick-sized monitor strapped to their face? 
Why stare at glass and plastic for deeper meaning? What leads people to surrender nearly all spatial cues about their physical place in the world to a computer?\"<br>Paul Roquet<br><br></p>\n</blockquote>\n<p style=\"text-align: center;\">&nbsp;</p>\n<p style=\"text-align: left;\">Amidst this labyrinth of audio data, this perceptual redefinition of space, at the heart of the digital era, what truly defines the essence of sonic immersion? And why are we so in need of immersing ourselves in it?<br><br><br></p>\n<p style=\"text-align: left;\">&nbsp;</p>\n<h2 style=\"text-align: left;\">Surrounded by 'something' or 'someone'</h2>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79596e\">\n<p>Psychological immersion represents a state where we are absorbed by an activity; our attention becomes captivated to the extent that we temporarily lose awareness of the surroundings.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c795cf7\">\n<p>This cognitive state involves temporarily suspending disbelief and directing consciousness toward the immediate experience. The result is an emotional connection that enhances enjoyment.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c796083\">\n<p>In the book \"Immersion and Distance: Aesthetic Illusion in Literature and Other Media\" Werner Wolf intertwines the concept of aesthetic illusion with the concept of immersion. He defines aesthetic illusion as a particular imaginative response triggered by various forms of artwork, such as movies, texts, images, sculptures, performances, and more.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c796416\">\n<p style=\"text-align: center;\">This type of illusion involves a mental state where we immerse ourselves emotionally and mentally in a world created or suggested by the artwork. This mental state goes beyond merely appreciating the aesthetic quality of the work or being emotionally affected by it. 
It implies a strong connection between the experience and the content.</p>\n<blockquote>\n<p style=\"text-align: center;\"><br><br>\"The most important qualification of the particular state of mind which is termed &lsquo;aesthetic illusion&rsquo; is in fact an activation of the imagination&hellip; This means that one must have the impression of being confronted with (or be surrounded by) at least &lsquo;something&rsquo; or &lsquo;somebody&rsquo; &ndash; which is more than merely feeling a mood, an emotion, or a deep appreciation.\"</p>\n</blockquote>\n<p style=\"text-align: center;\">&nbsp;</p>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79678b\">\n<p>This is how a relationship with the artwork is established, demanding our cognitive contribution to the experience, and the extent of that aesthetic illusion significantly depends on our mental interaction. Within the state of 'aesthetic illusion', we perceive and encounter the represented world as if it were real.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c796b1b\">\n<p>However, we can question the nature of this world. Is it limited solely to what is represented? How does this illusion intersect with the reality from which we experience it? In virtual spaces, what is reality?</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c796ead\">\n<p>At this point, it's worth highlighting the concept of being \"surrounded by at least something or someone,\" emphasizing the importance of spatial context. 
This leads us to reflect on the meaning of spatial perception and the sense of presence within an immersive sonic encounter.<br><br></p>\n<p>&nbsp;</p>\n<h2>Ancestral Virtualities: The temazcal Ceremony</h2>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c7975c5\">\n<p>While the term 'immersive' often conjures images of virtual reality devices and digital simulations, the essence of immersion extends far beyond the boundaries of the virtual realm. In fact, some of the most immersive encounters with sound have deep historical roots that transcend the electronic and digital realms.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79793c\">\n<p>Let's immerse ourselves in the ancestral immersive experience of the temazcal ceremony in Mexico.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c797c75\">\n<p>In the temazcal, spatial perception is everything. The architectural space of the temazcal resembles a small dome-shaped structure made of adobe, often located in a natural setting. The rounded structure symbolizes the womb of Mother Earth and is intentionally small to retain heat during the ritual.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c797ff2\">\n<p>The temazcal holds profound cultural significance in Mexican traditions, rooted in the ancient practices of indigenous civilizations. It's considered a sacred space to connect with the spiritual realm, purify the body and soul, and honor the elements of nature. The ritual use of heat, steam, herbs, and sound makes this experience a symbolic journey of renewal.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79836e\">\n<p>As the heat envelops the bodies, the echoes of chants and rattles grow deeper, transporting participants to an environment where time seems to blur alongside the increasing warmth. 
The immersive qualities of sound within this enclosed space amplify the transformative nature of the ritual, enhancing the sensory experience by creating an atmosphere that blurs the boundaries between the physical and spiritual realms.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c7986e8\">\n<p>The temazcal is a testament to the profound influence of sound on our perception of space, an ancient precursor to modern immersive audio technologies.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c798a68\">\n<p>Within its walls of earth, sound becomes a transformative force. The singing of traditional songs, the rhythmic percussion of drums, and the crackling of hot stones conspire to create an immersive experience that transcends time and space. An experience that engages all the senses.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c798df3\">\n<p>This historical context illustrates how sound has played a fundamental role in shaping our perception of space and creating immersive experiences long before the advent of digital technologies. The temazcal ceremony serves as an archetype for understanding the profound impact of sound on our spatial awareness and its ability to transcend temporal and physical boundaries.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79916d\">\n<p>Incorporating these insights into contemporary discussions on immersive audio allows us to appreciate that the power of sound to transform our understanding of space isn't a recent development. Sound has been interwoven into the tapestry of human culture for centuries, deeply influencing our relationship with the environments we inhabit. 
This ancient ceremony reminds us that the roots of immersive audio experiences are deeply embedded in our cultural history and offer valuable perspectives for exploring the role of sound in contemporary immersive technologies.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c7994df\">\n<p>Moreover, this historical perspective offers an alternative approach to sound narratives and the creation of immersive productions that bring the audience closer and connect them with the physical environment, bridging the gap between the virtual and physical worlds.</p>\n<p>&nbsp;</p>\n<h2>Data: the key to the future of immersive audio</h2>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c799bb9\">\n<p>The popularity of immersive audio experiences is the result of the convergence of various technologies, including virtual and augmented reality, along with the advancement of increasingly sophisticated algorithms. This, in turn, has led to the creation of high-quality audio equipment.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79a2d7\">\n<p>These advancements encompass not only state-of-the-art sound reproduction but also interactive and personalized audio that responds to user movements and choices; this significantly influences audience attitudes and consumption patterns where dynamic and interactive experiences that can be easily shared are sought after. 
All of this has accelerated the growth of immersive experiences.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79a5d9\">\n<p>This convergence space concerns us not only as users, viewers, or audience but also as content producers, storytellers.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79aa1c\">\n<p>The evolution of immersive audio experiences spans an increasingly broad horizon, starting from traditional surround sound and binaural technologies to cutting-edge innovations in spatial audio. It goes beyond audio systems and encompasses physical spaces, such as the development of sophisticated domes designed to create fully immersive audiovisual environments.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79ac12\">\n<p>Furthermore, the adoption of next-generation audio production tools by artists, producers, and engineers has a profound impact on traditional workflows. These tools introduce increasing complexity and precision, and in some cases, even possess decision-making capabilities, automating tasks and significantly accelerating production times.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79ae12\">\n<p>Just as audiences lose spatial signals of their physical location and immerse themselves in a parallel reality, algorithms seamlessly infiltrate our creative space, reshaping the landscape of production and composition. 
They transform our sonic works and redefine our relationship with sound, space, and the artistic process.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79aff9\">\n<p>Simultaneously, a growing demand for consuming immersive experiences calls for a deeper understanding of the user and the space where the experience will be projected. Factors such as the number of speakers, types of headphones, listening moments, preferences, and tastes play a crucial role in tailoring the experience to the user. As consumers, we share a significant amount of data to make our experiences increasingly immersive.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79b1e6\">\n<p>As we delve deeper into the future of immersive sound, it becomes imperative to address complex issues related to access, privacy, and data inclusion. Generating debates about access and data distribution in immersive audio experiences, involving both users and producers, is crucial. Simultaneously, fostering privacy through industry regulations to create a secure and inclusive environment benefits both creators and users.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79b3e2\">\n<p>The future of immersive audio marks the beginning of a paradigm shift that extends far beyond technological innovation. On one hand, it empowers independent creators, granting them access to sophisticated tools that were once exclusive to industry giants. This democratization fosters a diverse narrative landscape, enriching the global storytelling fabric. However, this increased accessibility can also lead to a content overflow, saturating the audience with a plethora of immersive experiences. 
While this content inundation reflects the positive impact of accessibility, it poses the risk of overwhelming audiences, potentially diluting the quality of engagement and reaching a saturation point where the sheer volume becomes daunting to navigate and interact with effectively.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79b825\">\n<p>At the same time, the potential of immersive audio in education and development, especially in countries and sectors outside the first world, signifies a gateway to democratizing knowledge. These technologies offer transformative educational tools, fostering innovation and creativity. However, the pronounced digital divide could hinder widespread access, creating disparities in skill development and impeding the realization of the full potential of these new tools across all sectors of audiovisual production.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79ba44\">\n<p>In this context, independent and experimental artists play a crucial role in shaping immersive sonic narratives. 
Their inclination to push boundaries and explore unconventional soundscapes fosters innovation within the field. They challenge established norms and experiment with new technologies and techniques to create unique immersive experiences that transcend convention. Their influence promotes diversity, creativity, and the evolution of immersive soundscapes, contributing significantly to the exploration and expansion of narrative possibilities.</p>\n<p>&nbsp;</p>\n<h2>Between the physical and virtual space and sound</h2>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79be70\">\n<p>Recognizing how sound shapes our understanding of space, whether in a digital landscape or sitting in a centuries-old cabin, brings us closer to understanding the profound transformations within the realm of next-generation audio and immersive technologies.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79c05d\">\n<p>The power of immersive sound intertwines technology, history, and culture, reshaping our perception and interaction with surrounding spaces. It transcends screens and headphones, urging us to explore spatial perceptions and the fundamental role of sound within them.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79c2cc\">\n<p>The future of immersive audio involves not only navigating the search for technological innovation but also understanding the reasons behind our immersion in the vast sea of data, perceptions, and human understanding. Here, art and technology converge to create narratives that do not necessarily distance us from the real world but foster a deeper human connection with the environment around us.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79c584\">\n<p>Diving deeper into these concepts reveals intricate layers of the essence of sound. 
It emerges not merely as an auditory experience but as a conduit intertwining historical narratives, cultural meanings, and technological advancements. Immersive audio becomes a means to capture the evolving perceptual landscape and our immersive engagement with these transformations.</p>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79c836\">\n<p>This exploration presents a unique opportunity: a chance to redefine our relationship with both tangible and virtual environments. It paves the way to reimagine our connection with space, uniting our past, present, and future. It opens doors to a profound understanding of how we relate to our surroundings.</p>\n<table style=\"border-collapse: collapse; width: 100%;\" border=\"1\"> <tbody> <tr> <td style=\"width: 50%;\">\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79cb2f\">\n<h4>References</h4>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79cdfc\">\n<p>William Gibson, Pattern Recognition (Penguin, 2003), 48.<br>Chisholm, R. H., Trauer, J. M., Curnoe, D., &amp; Tanaka, M. M. (2016). Controlled fire use in early humans might have triggered the evolutionary emergence of tuberculosis. Proceedings of the National Academy of Sciences of the United States of America, 113(32), 9051-9056. https://doi.org/10.1073/pnas.1603224113<br>Juhani Pallasmaa, Los ojos de la piel: la arquitectura y los sentidos (Gustavo Gili, 2012), 62.<br>Janet H. Murray, Hamlet on the Holodeck, updated edition: The Future of Narrative in Cyberspace (MIT Press, n.d.).<br>Real Academia Espa&ntilde;ola, Diccionario de la Lengua Espa&ntilde;ola, 2001.<br>Paul Roquet, The Immersive Enclosure: Virtual Reality in Japan (Columbia University Press, 2022), 2.<br>Werner Wolf, Walter Bernhart, and Andreas Mahler, Immersion and Distance: Aesthetic Illusion in Literature and Other Media (Rodopi, 2013), 8.</p>\n</div>\n</div>\n</div>\n</td> <td style=\"width: 50%;\">\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79d0cc\">\n<h4>Bibliography</h4>\n</div>\n</div>\n</div>\n<div>\n<div>\n<div id=\"ld-fancy-heading-67d987c79d392\">\n<p>Dennis Baxter, Immersive Sound Production: A Practical Guide (CRC Press, 2022).<br>Karen Collins, Bill Kapralos, and Holly Tessler, The Oxford Handbook of Interactive Audio (Oxford Handbooks, 2014).<br>Peter H. Diamandis and Steven Kotler, The Future Is Faster Than You Think: How Converging Technologies Are Transforming Business, Industries, and Our Lives (Simon and Schuster, 2020).<br>David Dowling, Immersive Longform Storytelling: Media, Technology, Audience (Routledge, 2019).<br>Karmen Franinovic and Stefania Serafin, Sonic Interaction Design (MIT Press, 2013).<br>Michele Geronazzo and Stefania Serafin, Sonic Interactions in Virtual Environments (Springer Nature, 2022).<br>Juhani Pallasmaa, Los ojos de la piel: la arquitectura y los sentidos (Gustavo Gili, 2012).<br>Florian Freitag et al., \"Immersivity: An Interdisciplinary Approach to Spaces of Immersion,\" Ambiances, December 11, 2020, https://doi.org/10.4000/ambiances.3233.<br>William Gibson, Pattern Recognition (Penguin, 2003).<br>Mark Grimshaw and Tom Alexander Garner, Sonic Virtuality: Sound as Emergent Perception (Oxford University Press, USA, 2015).<br>Janet H. Murray, Hamlet on the Holodeck, updated edition: The Future of Narrative in Cyberspace (MIT Press, n.d.).<br>Real Academia Espa&ntilde;ola, Diccionario de la Lengua Espa&ntilde;ola, 2001.<br>Frank Rose, The Art of Immersion: How the Digital Generation Is Remaking Hollywood, Madison Avenue, and the Way We Tell Stories (W. W. Norton &amp; Company, 2011).<br>Paul Roquet, The Immersive Enclosure: Virtual Reality in Japan (Columbia University Press, 2022).<br>Paul Roquet, Ambient Media: Japanese Atmospheres of Self (U of Minnesota Press, 2016).<br>Chisholm, R. H., Trauer, J. M., Curnoe, D., &amp; Tanaka, M. M. (2016). Controlled fire use in early humans might have triggered the evolutionary emergence of tuberculosis. Proceedings of the National Academy of Sciences of the United States of America, 113(32), 9051-9056. https://doi.org/10.1073/pnas.1603224113<br>Werner Wolf, Walter Bernhart, and Andreas Mahler, Immersion and Distance: Aesthetic Illusion in Literature and Other Media (Rodopi, 2013).</p>\n</div>\n</div>\n</div>\n</td> </tr> </tbody> </table>\n<p>&nbsp;</p>\n</div>\n</div>\n</div>\n<p>&nbsp;</p>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>",
        "topics": [
            {
                "id": 70,
                "name": "Audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 232,
                "name": "Audio 3d",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2765,
                "name": "future",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2341,
                "name": "immersive audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1211,
                "name": "narrative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2766,
                "name": "Next Audio Generation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2767,
                "name": "perceptions",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 265,
                "name": "Sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 439,
                "name": "Spaces",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 149,
                "name": "Technology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 29278,
            "forum_user": {
                "id": 29250,
                "user": 29278,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Sol-Rezza-05-2024-214x300.jpg",
                "avatar_url": "/media/cache/b1/29/b12985af83e892ca90cecaaaf693b3b9.jpg",
                "biography": "Sol Rezza is an Argentinian composer, sound designer, and audio engineer. Her practice incorporates experimental electronics with spatial audio to create immersive experiences for virtual ecosystems and live performances.\nShe combines multilingual voice samples, granular synthesis, and sequencers with open-source multichannel audio technology such as the SoundSquares plug-in.\nCurrently, she is developing research on how new technologies (AI, machine learning, VR, etc.) influence the creation and production of contemporary storytelling.\nRezza's work has been shown at MUTEK Montreal (CA), MUTEK (AR/ES), CTM Festival (DE), IN/OUT Festival, Tsonami Festival (CL), BRIWF Festival (BR), Simultan Festival (RO), Borealis Festival (NO), HÖRLURS Festival (SE), among others. She participated in artist residencies including the Radio Art Residency at Radio Corax (DE), the Somerset House Studios Residency (UK), and the Binaural Nodar Residency (PT).",
                "date_modified": "2026-02-05T19:19:13.352241+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "solrezza",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 104,
                    "user": 29278,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "the-future-is-here-listening-to-space",
        "pk": 3361,
        "published": true,
        "publish_date": "2025-03-18T16:07:41.275592+01:00"
    },
    {
        "title": "Comprovisation as a compositional method for a neuroscience-inspired contemporary music show for toddlers aged 0 to 2 by Anne Chabot-Bucchi",
        "description": "In this presentation, I propose to outline the process of creating a contemporary musical experience for toddlers and the compositional method used, based on improvisation and to discuss how previous neuroscience literature has influenced this process.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p style=\"text-align: center;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/46d2036c14b284ea9b5a743bbac18143.jpeg\" width=\"631\" height=\"577\" /><span>&nbsp; &nbsp; &nbsp;</span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/fff1a0f8b78776ac9f15e5aff872d29a.jpeg\" width=\"538\" height=\"575\" /></p>\r\n<p style=\"text-align: left;\"></p>\r\n<p style=\"text-align: left;\">Presented by : Anne Chabot-Bucchi</p>\r\n<p style=\"text-align: left;\"><a href=\"https://forum.ircam.fr/profile/annechabotbucchi/\" target=\"_blank\">Biography</a></p>\r\n<p>Over the past decade, an increasing number of concerts have been designed for young children (Dyonissiou &amp; Fytika, 2017; Ben Moshe &amp; Gluchankof, 2021), offering them the opportunity to discover and explore music and engage in non-formal music learning (Creech et al., 2020). Infants, considered &ldquo;music connoisseurs&rdquo; from birth due to their excellent musical memory and early listening skills (Trehub &amp; Deg&eacute;, 2015), seem keenly interested in these musical performances (Barbosa et al., 2023; Kragness et al., 2023a; Kragness et al., 2023b). Recent studies of babies' engagement when attending a musical performance with their parents have shown that these little ones are able to remain engaged over a long period (Barbosa et al., 2023). 
Babies are more engaged when listening to songs than lullabies (Kragness et al., 2023) and when attending a live performance rather than a recording of the same performance (Kragness et al., 2023). While these exciting results indicate that babies may enjoy participating in musical performances where musicians perform tonal classical or children's music, we still need a clear understanding of how this engagement may vary depending on the musical style of the performance. To make progress in this area, I am currently conducting a research-creation project aimed at (1) creating an immersive musical experience in contemporary music, (2) documenting the child's engagement while participating in this musical experience, and (3) identifying factors that may influence the child's engagement while participating in this musical experience. The aim beyond this project is to make contemporary music accessible to as many people as possible. In this presentation, I propose to outline the process of creating this musical experience and the compositional method used, based on improvisation (phase 1), and to discuss how previous neuroscience literature has influenced this process.</p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 96948,
            "forum_user": {
                "id": 96827,
                "user": 96948,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_6606.jpeg",
                "avatar_url": "/media/cache/ff/fd/fffd369a7d4c4df10e6fada6b0267be0.jpg",
                "biography": "As a percussionist, she is actively involved in teaching her instrument as well as in contemporary creation and new forms of improvisation, particularly in chamber music. She holds two Master's degrees in music (Interpretation and Pedagogy) and a Postgraduate diploma in chamber music from the Haute École de Musique de Genève under the supervision of Yves Brustaux, Jean Geoffroy and William Blank. She also holds an Artist Diploma from McGill University with Fabrice Marandola. At the same time, in order to understand brain function, she undertook studies in psychology. Passionate, her main aim is to make contemporary and new music accessible to as many people as possible. As a founding member of the Ensemble Muet (percussion quartet), the Ensemble Aukio (2 pianos/2 percussion) and Luo Musica (women's ensemble), she is constantly expanding her opportunities to present the music of today. As part of a PhD project in music research-creation that she is currently pursuing at the Université de Montréal under the supervision of Jean-Michaël Lavoie, she is developing a new approach to musical interpretation and teaching, particularly of contemporary music, in relation to brain development.",
                "date_modified": "2025-02-18T18:00:01.549162+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "annechabotbucchi",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "comprovisation-as-a-compositional-method-for-a-neuroscience-inspired-contemporary-music-show-for-toddlers-aged-0-to-2-by-anne-chabot-bucchi",
        "pk": 3210,
        "published": true,
        "publish_date": "2025-02-03T11:28:46+01:00"
    },
    {
        "title": "The Guitar as a Spatialization Device “Gesture, Musical Material, and Spatial Audio Technology\" by Natán Ide",
        "description": "This project\r\nexplores the electric guitar as a spatialization device through a research-creation process. A custom system integrates gesture, spatial sound processing, and performance, forming an experimental platform where body,music, and space interact in real time. The work investigates how spatial audio and corporeality reshape stage presence and perception. Drawing from intercorporeality, spatial listening, and compositional, spatiality, it proposes a sensitive, expressive performance environment.",
        "content": "<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3237,
                "name": "electric guitar",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1794,
                "name": "Embodied Music Cognition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3236,
                "name": "gesture control",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 3235,
                "name": "spatial audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 82434,
            "forum_user": {
                "id": 82334,
                "user": 82434,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2025-09-02_at_12.31.12.jpeg",
                "avatar_url": "/media/cache/55/ac/55ac6363900d0934dc927eb7c8787205.jpg",
                "biography": "Natán Ide is an acoustical engineer, composer, and performer. He holds a Master’s degree in Acoustics and Vibrations from the Universidad Austral de Chile and is currently pursuing a Master’s in Arts, with a specialization in Music, at the Pontificia Universidad Católica de Chile. His work lies at the intersection of art and technology, focusing on spatial audio, gesture, and musical performance. From a research-creation perspective, he explores how sound spatialization, gestural control, and corporeality can expand the expressive possibilities of the performer in real time.\n\nHis main instrument is the electric guitar, which he approaches from an expanded perspective that integrates sound spatialization technologies and gestural control. He has trained within the framework of Guitar Craft, a practice that has deeply influenced his approach to the instrument, the body, and listening. He currently teaches at the Faculty of Architecture and Arts at the Universidad Austral de Chile, where he combines his technical and artistic expertise in the education of new creators.",
                "date_modified": "2025-09-25T11:33:55.204289+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nidep",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-guitar-as-a-spatialization-device-gesture-musical-material-and-spatial-audio-technology",
        "pk": 3595,
        "published": true,
        "publish_date": "2025-08-02T19:44:43+02:00"
    },
    {
        "title": "A garden of sensory delights - Ben Koppelman, Alexandra Topaz",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p><span>&lsquo;Pairi-daēza&rsquo; means &lsquo;walled garden&rsquo; in Persian, and is the origin of the Hebrew word for orchard, &lsquo;pardes&rsquo;, and the English word for paradise.&nbsp;</span><span>Pairi-daēza is also related to &lsquo;parigauda&rsquo;, meaning a screen, which is the root of the Hebrew, &lsquo;pargod&rsquo; - a key concept in Jewish mysticism that describes our veiled relationship to immanence.&nbsp;</span><span>This presentation discusses a proof of concept installation that </span>tells a story of mystical transformation inspired by Hieronymus Bosch&rsquo;s uncanny&nbsp;<em><span>Garden of Earthly Delights</span></em><span>.&nbsp;</span></p>\r\n<p><span>Holography is a powerful medium to represent yearnings beyond material presence to experience something more immaterial. This installation </span><span>explores holography as an augmented reality to allow people to be immersed in physical and virtual worlds simultaneously while feeling equally present in both. </span><span>Using projections and Holotronica&rsquo;s <em>Hologauze</em> to create visual holograms, it also explores the potential of audio holograms by combining headphone theatre and spatial sound to create even more immersive experiences.</span></p>",
        "topics": [
            {
                "id": 1190,
                "name": "augmented reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1188,
                "name": "headphone theatre",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1187,
                "name": "holograms",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1189,
                "name": "mystical",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1191,
                "name": "spatial sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32875,
            "forum_user": {
                "id": 32827,
                "user": 32875,
                "first_name": "Ben",
                "last_name": "Koppelman",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/26cd7411aa553687014c4ff1f7cc7aaf?s=120&d=retro",
                "biography": "Ben is currently studying at the Royal College of Art where he is a sound pathway student on the Information Experience Design MA. His studies have been exploring psychoacoustics and spatial sound, as well as storytelling, since his practice includes flash fiction and spoken word. Ben is also leading research at Kimatica Studio on inducing states of transcendence and performance design. Ben’s interests draw on previous studies in philosophy (of psychology) at King's College London and philosophy (of science) at Cambridge University, as well as theology studies in Israel.",
                "date_modified": "2025-07-25T12:10:30.496619+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 962,
                        "forum_user": 32827,
                        "date_start": "2023-06-12",
                        "date_end": "2025-10-16",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "benk",
            "first_name": "Ben",
            "last_name": "Koppelman",
            "bookmarks": []
        },
        "slug": "a-garden-of-sensory-delights",
        "pk": 2099,
        "published": true,
        "publish_date": "2023-02-28T19:17:31+01:00"
    },
    {
        "title": "DAMUS: A Collaborative System For Choreography And Music Composition",
        "description": "Presented during the IRCAM Forum @NYU 2022\r\n\r\nThroughout the history of dance and music collaborations, composers and choreographers have always engaged in separate workflows. Usually, composers and choreographers complete the music and choreograph the moves separately, and the lack of mutual understanding of their artistic approaches results in a long production time. There is a strong need in the performance industry to reduce the time for establishing a collaborative foundation, allowing for more productive creations.\r\nWe propose DAMUS, a work-in-progress collaborative system for choreography and music composition, in order to reduce production time and boost productivity. DAMUS is composed of a dance module DA and a music module MUS. DA translates dance motion into MoCap data, Labanotation, and number notation, and sets rules of variations for choreography. MUS produces musical materials that fit the tempo and rhythm of specific dance genres or moves.\r\nWe applied our system prototype to case studies in three different genres. In the future, we plan to pursue more genres and further develop DAMUS with evolutionary computation and style transfer.",
        "content": "<h5><span>Authors of the article: <a href=\"https://forum.ircam.fr/profile/tiangezhou/\">Tiange Zhou</a>&nbsp;(<a href=\"mailto:tiangezhoumusic@gmail.com\">tiangezhoumusic@gmail.com</a>), <a href=\"https://forum.ircam.fr/profile/anna-yu-aya-yale-edu/\">Anna Borou Yu</a>&nbsp;(<a href=\"mailto:anna.yu@aya.yale.edu\">anna.yu@aya.yale.edu</a>), <a href=\"https://zachzeyuwang.github.io/\">Zeyu Wang</a>&nbsp;(<a href=\"mailto:zeyuwang@ust.hk\">zeyuwang@ust.hk</a>),&nbsp;<a href=\"https://forum.ircam.fr/profile/jiajianmin/\">Jiajian Min</a>&nbsp;(<a href=\"mailto:jiajian.min@aya.yale.edu\">jiajian.min@aya.yale.edu</a>)</span></h5>\r\n<div></div>\r\n<div></div>\r\n<h3><span>1. INTRODUCTION</span></h3>\r\n<p><span>We present a work-in-progress project named DAMUS, a collaborative, modular, and data-driven system that aims to algorithmically support composers and choreographers in generating&nbsp;</span><span>original and diverse developments of their creative variations and continuities. The project name DAMUS means &ldquo;we offer&rdquo; in Latin; it is also a compound of DA, the dance module, and MUS, the music module. </span></p>\r\n<p><span>Conventional dance-music collaborations regularly&nbsp;</span><span>take a significant amount of time for collaborators to adapt to one another&rsquo;s creative languages; occasionally, collaborations can become overly exclusive, occurring only between certain artistic groups. Nowadays, fast-paced productions and more inclusive creative collaboration environments necessitate an efficient solution that can preserve a significant amount of artistic authenticity while also facilitating rapid &ldquo;brainstorming.&rdquo; DAMUS, the compound collaborative authoring system, aims to build a collaborative foundation for choreographers and composers through algorithms. 
Using DAMUS reduces the time spent on communication and motivates dynamic expression by treating their complete or scattered creative ideas as preliminary units. </span></p>\r\n<p><span>We leverage machine learning, evolutionary computation, and creative constraints to produce dance and music variations either&nbsp;</span><span>for a single user or for multiple users, where they can interact with each other and express themselves dynamically. We will present details of our system design, data collection, the DAMUS components (the dance module and the music module), as well as the underlying logic for each. Furthermore, we would like to share our preliminary creative outputs through case studies involving three distinct dance genres: ballet, modern dance, and Chinese Tang Dynasty dance.</span></p>\r\n<p>&nbsp;</p>\r\n<h3><span>2. RELATED WORK</span></h3>\r\n<p><strong>Choreography.</strong><span> Many have explored how algorithms and machine learning could engage with choreography. Since the 1960s, Merce Cunningham applied new computer software and motion capture technology to choreograph dance in a brand new way [1]. The experimental work of Michael Noll also generated basic choreographic sequences from computers [2]. Some scholars have studied choreography from a semiotic aspect and invented dance notations that maintain the authenticity of choreography and can be further used in dance learning, practice, and research, e.g., Labanotation, Benesh Notation, and Eshkol-Wachman Notation. For example,&nbsp;</span><span>the Microsoft Labanotation Suite [3] can translate between a MoCap skeleton and Labanotation, which enables robots to learn to dance from observation.</span></p>\r\n<p><strong>Music Composition. </strong><span>Algorithmic composition has a history spanning hundreds of years [4]. 
With the development of technologies used in the art field, people have started to apply more advanced programming methods to music composition to support creative purposes. For example, the HPSCHD system established by computer scientist Lejaren Hiller for composer John Cage, which is influenced by chaos theory, assists in making music elements move away&nbsp;</span><span>from unity [5]. Another example is OpenMusic, a visual programming language for computer-assisted music creation developed at IRCAM by Jean Bresson, Carlos Agon, and G&eacute;rard Assayag to support the development of spectral-based calculations in music composition. It enables interesting mathematical algorithms to provide fascinating sonic outcomes [6]. The third example is Max/MSP, which provides basic Markov models and generative grammars for music creators to generate their ideas.&nbsp;</span></p>\r\n<p><strong>Dance-Music Interaction.</strong><span> Recent advances in algorithms have also enabled dance synthesis from music data. For example, by analyzing the beats and rhythms in a video, researchers can create or manipulate the appearance of dance for better synchronization [7, 8]. State-of-the-art datasets like AIST++ and deep learning architectures like transformers have also pushed the boundaries of dance synthesis, producing realistic dance motions matching the input music [9, 10].</span></p>\r\n<p>&nbsp;</p>\r\n<h3><span>3. SYSTEM DESIGN</span></h3>\r\n<p><span>DAMUS is based on the corresponding relationships between dance and music: space and frequency, time and duration, weight and velocity, flow and effect (Fig. 1).</span></p>\r\n<p><span><img src=\"/media/uploads/fig._2._system_design_of_damus.png\" alt=\"\" width=\"1212\" height=\"399\" /></span></p>\r\n<p><span>The inputs of symbols and notes will be analyzed as creative constraints. 
Based on motion pattern selection and mapping pattern execution, with further evolutionary computation and style transfer, DAMUS will generate new variations of dance and music pieces as a foundation for collaboration. Artists could then manually select the ones they want (Fig. 2).</span></p>\r\n<p><span>This system can be jointly used by multiple users, or by a single user and another media resource in the system.</span></p>\r\n<p>&nbsp;</p>\r\n<h3><span>4. METHODOLOGY</span></h3>\r\n<p>&nbsp;</p>\r\n<p><span>4.1. Dance Module</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>4.1.1. Encoding Relationships</span></p>\r\n<p><span>The first step is to translate a human figure to an abstract notation system. We can get an animated skeleton from MoCap of a dancing human figure, and map it to Labanotation through the Microsoft Labanotation Suite, where we further highlight 13 body parts and joints. This process could also be done by a trained dance notator, and scholars have created many Labanotation documents of performances over the past century. In Labanotation, each body part is drawn onto a specific column, and we can set an encoding execution to create a number notation sheet, noting each of the body parts at a specific time (Fig. 3).</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>4.1.2. Algorithm-Based Re-Choreography</span></p>\r\n<p><span>As the performance (musical) piece can be cut into paragraph, sentence, bar, beat, and timecode (e.g., 1/2 beat), and the dance notation and music notation are aligned in time, we decompose the Labanotation into timecodes with a specific composition&nbsp;</span><span>of body part conditions. For simplicity, we include 27 still directions and 16 turns. In this way, the composition of the condition of each body part at each timecode is written as a series of 13 two-digit numbers. This could be transformed into a number notation sheet, a sequence of 13-dimensional vectors for machine learning, or a sequence of point clouds for visualization (Figs. 
4 and 5).</span></p>\r\n<p><span>For a specific piece of performance, and based on the expression of symbolic Labanotation and its encoding, we could find patterns of motion elements P1, P2, ..., Pm, and generate a series of codes of alternation M1, M2, ..., Mn. Some simple M&rsquo;s come to mind: mirroring, alternating the step combination when the upper body shares the pattern, alternating the arms when the step combination shares the pattern. Derived from comprehensive dance research and manually picked P&rsquo;s and M&rsquo;s, we will develop algorithms to generate more M&rsquo;s, e.g., keeping the same motif while changing speed and rhythm. At the same time, the algorithm will decide on the allocation of the various mappings applied to the patterns.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>4.1.3. Visualization of the Re-Choreography</span></p>\r\n<p><span>After the variation process is applied, we need a strategy to visualize the re-choreography. One way is to apply the reverse of the process from symbolic Labanotation to number notation sheet in Fig. 3. A trained dance notator or scholar could read the Labanotation and perform it directly. Another way is to connect our algorithm to the Microsoft Labanotation Suite, through which the varied Labanotation could be translated into an animated skeleton or character, and a dancer could follow the animation and practice.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>4.1.4. Next Steps</span></p>\r\n<p><span>In a nutshell, the strategy of working with a single piece involves finding motion patterns (P&rsquo;s) in symbolic Labanotation as well as number notation sheets, and setting mapping patterns of choreography alternation (M&rsquo;s). 
When working with a series of performance pieces of similar style, era, or creators, the patterns of motions and mappings could be collected together as a dataset, and a compound algorithm could be applied to select and create new codes from the pool.&nbsp;</span></p>\r\n<p><span>We will also add more conditions and randomness to the algorithm for a better computational outcome. Solid dance research on specific motion patterns would also benefit the style maintenance during the process of mapping variations. Further on, we consider inviting the original creators or dance groups of the pieces to test the altered choreography, and iterating afterwards.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>4.2. Music Module</span></p>\r\n<p><span>In traditional dance-music collaborations, choreographers frequently share with the composer specific pieces of music that they have previously used as sources of inspiration for future compositions. This method, however, is frequently unproductive, particularly in new creative teams or when the composer lacks experience producing music for dance. It is also difficult for the composer to rapidly grasp the choreographer&rsquo;s vision, and the resulting composition commonly deviates significantly from the dance&rsquo;s required rhythms. To solve this issue, we need to ensure the music module can produce musical materials that fit the tempo and rhythm of specific dance genres or moves. Therefore, the first step in developing our dance and music database has been to assess the amount of music that has its own fixed structure and rules for specific dance genres. For example, in the history of ballet performances, baroque suite music and concerto music have frequently been used in productions. As a result, we analyzed a large number of pieces in this genre by J.S. Bach, George Frideric Handel, and G. 
Philipp Telemann, the three most famous baroque composers, who also left the largest number of works in this genre.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>4.2.1. Digitization</span></p>\r\n<p><span>The entire tempo and rhythmic model extraction process for all the different music genres at this stage starts with digitizing the score, exporting it to a midi file, transferring the midi file to a JS file, and finally extracting when and how specific music events occur over a set period of time (Fig. 6). When we come across musical pieces that we already have in our midi file collections, the process can begin at step three. This does not mean the music models have to be genre-specific: creators could input any possible inspiring musical score as midi or XML files into this system and analyze it as an influential element for the next step.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>4.2.2. Extrapolation and Variation</span></p>\r\n<p><span>Moreover, to make use of the influential tempo and rhythmic models the module has extracted earlier from the original sound files, the composer could either use the hidden Markov&nbsp;</span><span>method to replace the original pitches with similar baroque music melodies, use a random pitch generator, or morph different musical pitch characters, such as installing jazz music pitches into baroque tempo and rhythmic models, or other combinations to create fusion outcomes. This is an extrapolation process with plenty of possibilities. After this part, the module aims to transfer the data from the programming stage back to midi files and musical scores, so the creator can use the materials directly without in-depth coding experience.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>4.2.3. Next Steps</span></p>\r\n<p><span>Additionally, the entire compound system is not intended to supplant the authors&rsquo; originality, but rather to assist their needs. 
The next steps in our working process for the music module will be to provide as many features as possible that allow the composer to edit and influence the outcomes by adding or changing specific sound units, or by constraining the algorithm with specific criteria: for example, setting a specific key signature and pace, quantizing according to time signatures, orchestrating for specific groups of instruments, setting up basic harmonic progressions, and most importantly, relating dance movements to synchronized or nonsynchronized musical contours.</span></p>\r\n<p>&nbsp;</p>\r\n<h3><span>5. CASE STUDIES</span></h3>\r\n<p><span>5.1. Database Supported Variations</span></p>\r\n<p><span>The team&rsquo;s composer, choreographer, designer, and computer scientist have been testing the DAMUS system and trying to come up with new variables as they research. We would like to share two case studies addressing two different collaborative creation challenges. The first challenge a choreographer and composer team frequently encounters is creating a new piece together that draws heavily on existing data. Therefore, we look at two existing sets of data from our database: a complete music score, a dance score, and a performance video in the ballet and modern dance genres.</span></p>\r\n<p><span>These two works both employ Baroque German composer J.S. Bach&rsquo;s compositions as music, paired with early-20th-century American dancer and choreographer Doris Humphrey&rsquo;s choreography as the dance part. We then try to make a different dance-music combination from the input data with a consistent pace and rhythmic model through several stages. For the music part, firstly, we extract the model as addressed earlier. 
Secondly, we replace the original pitches using a random pitch generator over the midi notes from 1 to 127 and generate three musical pieces, which we call random A, random B, and random C. This yields a set of quite interesting results. Since the music is clearly atonal, the musical outcomes are distinguishable from the original scores even though they are still two musical compositions in a rapid 3/4 pace and an elongated 4/4 pace. We believe they could be quite useful for very experimental creators, but for those who would like the musical materials to stay close to their sources, they could be a bit problematic.</span></p>\r\n<p><span>Therefore, thirdly, we replace the original pitches with new pitches generated from the original scores by the hidden Markov chain, which we call Markov A, Markov B, and Markov C. Additionally, we have set several counterpoint restrictions to avoid two pitches simultaneously sounding in minor second, major second, minor seventh, and major seventh intervals, which go against the rules of this very specific music genre. Specifically, we set functions to avoid &plusmn;1, 2, 10, 11 midi number combinations happening at the same time.&nbsp;</span></p>\r\n<p><span>For the dance part, the choreographic variation proceeds through quite different methods. The music and dance of a modern piece are always related in multiple dimensions, where deep analysis could be applied. One way is to keep the related patterns between music and dance in order to maintain the style of the original performance. For example, in Air on the G String, we have discovered several patterns, such as the elongated stretch, repeated rhythm, and contours of ascending and descending (Fig. 7).</span></p>\r\n<p><span>Based on the patterns of motion elements, we move to the physical variation of human bodies. Through analysis of symbolic Labanotation and its encoding, we could set mappings of variations following human kinetics and physical constraints. 
Taking the ballet Partita as an example, we name the repetitive patterns P1 = {(bar1, bar2), (bar3, bar4)} and P2 = {(bar9, bar10), (bar11, bar12), (bar13, bar14)} (Fig. 8). Also, P1 and P2 have the same rhythm, suggesting the possibility of mappings between them. Here we define a possible mapping as M1(P1 &harr; P2) = bar2[body] &harr; bar14[body]. Similarly, we find the repetitive pattern P3 = {(bar23, bar24), (bar25, bar26)}, and we could set another mapping to switch the pairs of bars within P3. Since the support remains the same, this mapping can be defined as M2(P3 ⟲) = bars23,24[arm] &harr; bars25,26[arm].</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>5.2. Creative Continuation Based on Original Inputs&nbsp;</span></p>\r\n<p><span>The second challenge that a choreographer and composer team frequently encounters is making variations from the raw materials they have already worked on, in order to add another section next to the existing work. In this case study, we take Chinese Tang Dynasty dance as our genre, and first create an authentic piece without algorithms. This specific dance genre is challenging as it requires in-depth research into ancient Chinese archives.</span></p>\r\n<p><span>Nan Ge Zi is the ancient Tang Dynasty dance piece we have recovered, derived from a hand-drawn textual dance notation discovered in the Mogao Caves in Dunhuang, China, in 1900, then shipped to France by the French sinologist Paul Pelliot and now held in the French National Library. The study of Nan Ge Zi originated in the 1930s, and the now-prevailing interpretation was completed by the Beijing Dance Academy in the 1980s. The scholars deciphered the text into a piece of dance, and recorded it in Labanotation.</span></p>\r\n<p><span>During our study, we first re-analyze the archives and renotate the dance in Labanotation based on the most updated research. 
Then, we create a dance-music piece with its very specific music instrumentation and dance movements to recover this historical dance as authentically as possible. Afterwards, we utilize DAMUS to generate more variations. For the dance part, we know from the re-analysis that the piece contains eight main motion motifs, which can be elaborated into four phrases as well as 48 bars comprising 144 beats. From a professional choreographer&rsquo;s perspective, it is reasonable to vary this continued section by changing the positions or recombining the body parts of the beats, bars, or phrases with the same motif (e.g., bars 4 and 16 both describe serving wine). The figure shows the comparison between the original and the re-choreography (Fig. 9).&nbsp;</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>For the music part, dance music in the Tang Dynasty has very specific instrument preferences, such as the Pipa, the Dizi, and various percussion instruments. Unlike the piano, these instruments have a limited range of pitches and require historical intonation. Therefore, when we work on the replacement of the original pitches, we carefully respect the pitch range of each instrument and make sure their intonation can be either adjusted by the composer inside DAMUS or conveniently downloaded as a midi file and edited in other digital audio workstation (DAW) programs that composers are familiar with. In this way we can maintain an authentic and unified character between the original and its continuations.</span></p>\r\n<p>&nbsp;</p>\r\n<h3><span>6. CONCLUSION</span></h3>\r\n<p><span>Based on fundamental research in dance and music, we have developed DAMUS, a system prototype for facilitating collaboration between composers and choreographers through mapping algorithms using symbolic and number notations of dance and music. We have taken the first steps toward a re-composition and re-choreography methodology for ballet, modern dance, and Chinese Tang Dynasty dance. 
We will test more genres and further develop DAMUS with evolutionary computation and style transfer to generate new iterations of dance and music pieces as collaborative foundations.</span></p>\r\n<p>&nbsp;</p>\r\n<h3><span>7. REFERENCES</span></h3>\r\n<p><span>[1] Thecla Schiphorst, &ldquo;Merce Cunningham: Making Dances with the Computer,&rdquo; Merce Cunningham: Creative elements, pp. 79&ndash;98, 2013.</span></p>\r\n<p><span>[2] Laura Karreman, &ldquo;The Dance without the Dancer: Writing Dance in Digital Scores,&rdquo; Performance Research, vol. 18, no. 5, pp. 120&ndash;128, 2013.</span></p>\r\n<p><span>[3] Katsushi Ikeuchi, Zhaoyuan Ma, Zengqiang Yan, Shunsuke Kudoh, and Minako Nakamura, &ldquo;Describing Upper-Body Motions Based on Labanotation for Learning-From-Observation Robots,&rdquo; International Journal of Computer Vision, vol. 126, no. 12, 2018.</span></p>\r\n<p><span>[4] Gerhard Nierhaus, Algorithmic Composition: Paradigms of Automated Music Generation, Springer Science &amp; Business Media, 2009.</span></p>\r\n<p><span>[5] Larry Austin, &ldquo;HPSCHD,&rdquo; Computer Music Journal, vol. 28, no. 3, pp. 83&ndash;85, 2004.</span></p>\r\n<p><span>[6] Jean Bresson, Carlos Agon, and G&eacute;rard Assayag, &ldquo;OpenMusic: Visual Programming Environment for Music Composition, Analysis and Research,&rdquo; in Proceedings of the 19th ACM International Conference on Multimedia, 2011, pp. 743&ndash;746.</span></p>\r\n<p><span>[7] Abe Davis and Maneesh Agrawala, &ldquo;Visual Rhythm and Beat,&rdquo; ACM Transactions on Graphics (TOG), vol. 37, no. 4, Jul 2018.</span></p>\r\n<p><span>[8] Yang Zhou, &ldquo;Adobe MAX Sneaks: Project On the Beat,&rdquo; https://research.adobe.com/video/project-on-the-beat/, 2020.</span></p>\r\n<p><span>[9] Kang Chen, Zhipeng Tan, Jin Lei, Song-Hai Zhang, Yuan-Chen Guo, Weidong Zhang, and Shi-Min Hu, &ldquo;ChoreoMaster: Choreography-Oriented Music-Driven Dance Synthesis,&rdquo; ACM Transactions on Graphics (TOG), vol. 40, no. 
4, Jul 2021.</span></p>\r\n<p><span>[10] Ruilong Li, Shan Yang, David A Ross, and Angjoo Kanazawa, &ldquo;AI Choreographer: Music Conditioned 3D Dance Generation with AIST++,&rdquo; in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 13401&ndash;13412.</span></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 840,
                "name": "choreography",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 898,
                "name": "collaborative system",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 897,
                "name": "music composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 30741,
            "forum_user": {
                "id": 30694,
                "user": 30741,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Jiajian_Min_Square.jpg",
                "avatar_url": "/media/cache/ea/db/eadb74ae4fc2dd33ccc2db2c44056e61.jpg",
                "biography": "Jiajian Min is an architect, multimedia artist, and interdisciplinary researcher. He is the Co-Founder and Executive Director of MYStudio in Boston, Project Lead at Harvard University Chinese Art Media Lab, visiting critic at China Academy of Art School of Design & Innovation and China Central Academy of Fine Arts School of Architecture, and Alumni Mentor of Yale University. Jiajian engages in contemporary interpretation of digital heritage, mixed reality spatial design, as well as immersive and interactive experience. \nHis artworks have been featured at Hermes Creative Awards, Lumen Prize Longlist, Ars Electronica Art Gallery, ACM Siggraph Asia Art Gallery, IRCAM FORUM at NYU, Chinagraph, Chengdu Biennale, Beijing Media Art Biennale, Asia Digital Art Exhibition, etc. His research has been published by ACM Siggraph Asia,IEEE AIART Workshop, etc. He has taught workshops and lectured at Yale University, UCSD, and Guangzhou Academy of Fine Arts.",
                "date_modified": "2022-09-06T10:29:46+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jiajianmin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "damus-a-collaborative-system-for-choreography-and-music-composition",
        "pk": 1307,
        "published": true,
        "publish_date": "2022-09-07T08:52:57+02:00"
    },
    {
        "title": "Latest news on ASAP and Partiels projects by Pierre Guillot",
        "description": "The latest developments in the Partiels and ASAP projects will be presented.",
        "content": "<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div></div>\r\n<div>Partiels is an application designed for analysing digital audio files, intended for use by researchers in signal processing, musicologists, composers and sound designers. It offers a dynamic and user-friendly interface for exploring the content and characteristics of sounds.</div>\r\n<div>ASAP - A collection of audio plug-ins that allows sound to be transformed in a creative way.</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<p></p>",
        "topics": [],
        "user": {
            "pk": 86096,
            "forum_user": {
                "id": 85993,
                "user": 86096,
                "first_name": "Karin",
                "last_name": "Laenen",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/65d11482a61a673c06dbdcf4cb9d156b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-03-04T16:45:07.346631+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 944,
                        "forum_user": 85993,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 656,
                                "membership": 944
                            },
                            {
                                "id": 657,
                                "membership": 944
                            },
                            {
                                "id": 846,
                                "membership": 944
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "laenen",
            "first_name": "Karin",
            "last_name": "Laenen",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 86096,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "latest-news-on-asap-and-partiels-projects-by-pierre-guillot-1",
        "pk": 4376,
        "published": true,
        "publish_date": "2026-02-17T13:51:08+01:00"
    },
    {
        "title": "Moving Towards Synchrony",
        "description": "Moving Towards Synchrony is an immersive work whose purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that have been generated -and defined by- those same physiological events.",
        "content": "<p>Link to the video presentation:&nbsp;<br><a href=\"https://vimeo.com/514333273\">https://vimeo.com/514333273&nbsp;</a></p>\n<p><strong>Introduction:</strong></p>\n<p>My name is Johnny Tomasiello and I am a multidisciplinary artist and composer, living and working in New York.<span>&nbsp;</span></p>\n<p>My piece, titled <em>Moving Towards Synchrony, version 3, </em>is an immersive work whose purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that have been generated -and defined by- those same physiological events.</p>\n<p>It investigates the neurological effects of modulating brain waves and their corresponding physiological effects by use of a Brain-Computer Music Interface, which allows for the sonification of the data captured by an electroencephalogram.</p>\n<p>The work presents an interactive computer-assisted compositional performance system that can teach participants how to effect a positive change in their own physiology by learning to influence the functions of the autonomic nervous system through neuro- and bidirectional feedback.<span>&nbsp;</span></p>\n<p>The methodology involves collecting physiological data through non-invasive neuroimaging. A subject&rsquo;s brainwaves are used to generate real-time interactive music compositions which are simultaneously experienced by that subject. The melodic and rhythmic content are derived from, and constantly influenced by, the subject&rsquo;s EEG readings. A subject, focusing on the generative stimuli, will attempt to elicit a change in their physiological systems through their experience of the bidirectional feedback. 
The resulting physiological responses will be recorded and measured to determine the efficacy of using external stimuli to affect the human body both physiologically and psychologically.<br><br>EEG brainwave data has shown high levels of success in classifying mental states [1], which affect &ldquo;autonomic modulation of the cardiovascular system&rdquo; [2], and there are existing studies investigating how music can influence a response in the autonomic nervous system. [3] It is with these phenomena in mind that this work was created.<span>&nbsp;</span></p>\n<p>Increased activity in the alpha wave frequency range is &ldquo;usually associated with alert relaxation&rdquo;. [4] Methods intended to increase activity in the alpha wave frequency range through feedback, autogenic meditation, breathing exercises, and other techniques are called alpha training.</p>\n<p>Positive changes in alpha are what I am primarily concerned with here, since research has shown that stimulating activity within alpha causes muscle relaxation, pain reduction, breathing rate regulation, and decreased heart rate. [4] [5] [6] It has also been used for reducing stress, anxiety and depression, and can encourage improvements in memory and mental performance, and aid in the treatment of brain injuries.</p>\n<p>In addition to investigating these neuroscience concerns, this work is designed to explore the validity of using the scientific method as an artistic process. The methodology will be to create an evidence-based system for the purpose of developing research-based projects. This will limit, initially, subjective interpretation of the work and will encourage a mindful and intentional interaction with the experience itself. What is learned will determine the value of the work.</p>\n<p>As Gita Sarabhai expressed to John Cage, &ldquo;...music conditions one's mind, leading to &lsquo;moments in [one's] life that are complete and fulfilled&rsquo;&rdquo; [7]. 
Music, in this case, can also be used by the mind to condition one's body.</p>\n<p>&nbsp;</p>\n<p><strong>Information on EEG:</strong></p>\n<p>An electroencephalogram (also known as an EEG) is an electrophysiological monitoring method used to record the electrical activity of the brain. A typical adult human EEG signal is between 10 and 100 &micro;V (microvolts) in amplitude when measured from the scalp. It was invented by German psychiatrist Hans Berger in 1929, and research into how brainwaves can be interpreted and modulated began shortly thereafter.<span>&nbsp; </span>Using an EEG, you are able to directly measure neural activity and capture cognitive processes in real time. Berger proved that alpha waves (also known as Berger waves) were generated by cerebral cortical neurons.</p>\n<p>In 1934, English physiologists Edgar Adrian and Bryan Matthews first described the sonification of alpha waves derived from EEG data. [8] They found that &ldquo;non-visual activities which demand the entire attention (e.g. mental arithmetic) abolish the waves; sensory stimulation which demand attention also do so&rdquo; [9], showing how concentration and thought processes affected activity in the alpha wave frequency range.</p>\n<p>The brain wave activity recorded in an EEG is a summation of the inhibitory and excitatory postsynaptic potentials that occur across a neuronal membrane. 
[10]</p>\n<p>The measurements are taken by way of electrodes placed on the scalp.<span>&nbsp; </span>The readings are&nbsp;divided into five frequency bands, delineating slow, moderate, and fast waves.<span>&nbsp; </span>The bands, from slowest to fastest, are:</p>\n<p>&nbsp;</p>\n<p><strong>Delta</strong>, with a range from approximately 0.5Hz&ndash;4Hz,<span>&nbsp;</span></p>\n<p>which signifies deepest meditation or dreamless sleep</p>\n<p><strong>Theta</strong>, from approximately 4Hz&ndash;8Hz,<span>&nbsp;</span></p>\n<p>signifying meditation or deep sleep.<span>&nbsp;</span></p>\n<p><strong>Alpha</strong>, from approximately 8Hz&ndash;13Hz,<span>&nbsp;</span></p>\n<p>representing quietly flowing thoughts.</p>\n<p><strong>Beta</strong>, from approximately 13Hz&ndash;30Hz,<span>&nbsp;</span></p>\n<p>which is a normal waking state.</p>\n<p>And<span>&nbsp;</span></p>\n<p><strong>Gamma</strong>, from approximately 30Hz&ndash;42Hz,<span>&nbsp;</span></p>\n<p>which is most active during simultaneous processing of information that engages multiple different areas of the brain.</p>\n<p>&nbsp;</p>\n<p>&nbsp;</p>\n<p><strong>History of EEG use in music:</strong></p>\n<p>Physicist Edmond Dewan began the study of brainwaves in the early 1960s and developed a &lsquo;brainwave control system&rsquo;.<span>&nbsp; </span>The system detected changes in alpha rhythms which were used to turn lighting on or off. &ldquo;The light could also be replaced by &lsquo;an audible device that made a beep when switched on&rsquo;, allowing Dewan to spell out the phrase &lsquo; <em>I can talk</em> &rsquo; in Morse code&rdquo;. [8] Dewan met experimental composer Alvin Lucier, a meeting that inspired the first actual brainwave composition.</p>\n<p>Alvin Lucier first performed <em>Music For Solo Performer</em> in 1965. 
It involved the composer sitting in a chair on stage, with his eyes closed while his brainwaves were recorded.<span>&nbsp; </span>The data from the recording was amplified and distributed to speakers set up around the room.<span>&nbsp; </span>The speakers were placed against different types of percussion instruments, so the vibration of the speakers would cause the instrument to sound.</p>\n<p>Lucier was able to control the percussion events through control of his cognitive functions, and found that a break in concentration would disrupt that control.<span>&nbsp; </span>Although mastery over the alpha rhythm was (and is) difficult, <em>Music For Solo Performer</em> greatly contributed to the field of experimental music and illustrated the depth of possibility in using EEG control over musical performance.</p>\n<p>Computer scientist Jacques Vidal published the paper <em>Toward Direct Brain-Computer Communication </em>in 1973, which first proposed the Brain-Computer Interface (BCI): a means of using the brain to control external devices.<span>&nbsp;</span></p>\n<p>This was the very beginning of BCMI research, which has evolved into an interdisciplinary field of study &ldquo;at the crossroads of music, science and biomedical engineering&rdquo; [11]. BCMIs (also referred to as Brain-Machine Interfaces, or BMIs) are still in use today, and the field of research around them is in its infancy.</p>\n<p>&nbsp;</p>\n<p><strong>Project Overview:</strong></p>\n<p>This project records EEG signals from the subject using four non-invasive dry extra-cranial electrodes from a commercially available MUSE EEG headband. Measurements are recorded from the TP9, AF7, AF8, and TP10 electrodes, as specified by the International Standard EEG placement system, and the data is converted to absolute band powers, based on the logarithm of the Power Spectral Density (PSD) of the EEG data for each channel. 
Heart rate data is obtained through PPG measurements, although that data is not used in the current version of this project. EEG measurements are recorded in Bels/Db to determine the PSD within each of the frequency ranges.</p>\n<p>The EEG readings are translated into music in real time, and the subjects are instructed to employ deep breathing exercises while they focus on the musical feedback. <br><br>Great care was taken in defining the compositional strategies of the interactive content in order to deliver a truly generative composition that was also capable of producing musically recognizable results.<span>&nbsp;</span></p>\n<p>All permutations of the scales, modes and chords being used, as well as rhythms, and performance characteristics, needed to be considered beforehand so the extraction of a finite set of parameters from the EEG data set could be parsed and used to produce a well-formed and dynamic piece of music.</p>\n<p>There are 3 main sections of this Max patch:</p>\n<p>1: The <strong>EEG data capture</strong> section.</p>\n<p>2: The <strong>EEG data conversion</strong> section.</p>\n<p>3: the<strong> Sound generation and DSP</strong> section.</p>\n<p>The <strong>EEG data capture</strong> section receives EEG data from the Muse headband, which is converted to OSC data and transmitted over WiFi via the iOS app Mind Monitor.<span>&nbsp; </span>That data is then split into the five separate brainwave frequency bandwidths: delta, theta, alpha, beta and gamma.<span>&nbsp; </span>Additional data is also captured, including accelerometer, gyroscope, blink and jaw clench, in order to control for any artifacts in the data capture.<span>&nbsp; </span>Sensor connection data is used to visualize the integrity of the sensor&rsquo;s attachment to the subject. 
PPG data is also captured for use in a future iteration of the project.</p>\n<p>The <strong>EEG data conversion</strong> section accepts the EEG bandwidth data representing specific event-related potentials and translates it to musical events.<span>&nbsp;</span></p>\n<p>First, significant thresholds for each brainwave frequency bandwidth are defined.<span>&nbsp; </span>These are chosen based on average EEG measurements taken prior to the use of the musical feedback. When those thresholds are reached or exceeded, an event is triggered.<span>&nbsp; </span>Depending on the mappings, those events can be one or more of several types of operations: the sounding of a note, a change in pitch or scale or mode, note values and timings, and/or other generative performance characteristics.</p>\n<p>&nbsp;</p>\n<p>This section comprises three subsections that format their data output differently, depending on the use case: <br>1. <strong>Internal Sound Generation and DSP</strong> for use completely within the Max environment.</p>\n<p>2. <strong>External MIDI</strong> for use with MIDI-equipped hardware or software.</p>\n<p>and<span>&nbsp;</span></p>\n<p>3. 
<strong>External Frequency</strong> <strong>and gate</strong>, for use with modular synthesizer hardware.</p>\n<p>Each of these can be used separately or simultaneously, depending on the needs of the piece.<span>&nbsp;</span></p>\n<p>For the data conversion, the event-related potentials are mapped in the following way:<br>Changes in <strong>alpha</strong>, relative to the predefined threshold, govern the triggering of notes, as well as the scale and mode.</p>\n<p>Changes in <strong>theta</strong>, relative to the threshold, influence note value.<span>&nbsp;</span></p>\n<p>Changes in <strong>beta</strong>, relative to the threshold, influence spatial qualities like reverberation and delay.</p>\n<p>Changes in <strong>delta</strong>, relative to the threshold, influence the degree of spatial effects.</p>\n<p>Changes in <strong>gamma</strong>, relative to the threshold, influence timbre.</p>\n<p>Any of these mappings or threshold decisions can be easily changed to accommodate a different thesis or set of standards.</p>\n<p>The third section is <strong>Sound generation and DSP</strong>. It is responsible for the sonification of the data translated from the <strong>EEG data conversion</strong> section. This section includes synthesis models, timbre characteristics, and spatial effects.</p>\n<p>This project uses three synthesized voices created in Max 8 for the generative musical feedback.<span>&nbsp; </span>There are two subtractive voices that each use a mix of sine, sawtooth and triangle waves, and one FM voice. <span>&nbsp;</span></p>\n<p>The timbral effects employed are waveform mixing, frequency modulation, and high-pass, band-pass and low-pass filters. 
The spatial effects used include reverberation and delay.<span>&nbsp; </span>In addition to the initial settings of the voices, each of the timbral and spatial effects is modulated by separate event-related potential data captured by the EEG.</p>\n<p>&nbsp;</p>\n<p><strong>Conclusions:</strong></p>\n<p>&nbsp;</p>\n<p>This project is a contemporary interpretation of an idea I've been interested in for many years, starting with investigation into bidirectional EKG biofeedback.<span>&nbsp;</span></p>\n<p>My initial experience with the subject was during a university degree in psychophysics (a branch of psychology). Some promising research at the university focused on reducing stress in asthmatic subjects for the purposes of lessening the frequency of attacks. [12]</p>\n<p>At the time, the technology required to explore this idea was of considerable size and prohibitively expensive for all but medical or formally funded academic purposes. With the current availability of low-cost electroencephalography (EEG) devices and heart rate monitors, the possibility of autonomous exploration of these concepts has become a reality.</p>\n<p>The procedure, when using this work for the exploration of the physiological effects of neuro- and bi-directional feedback, starts with obtaining and comparing two data sets: a control and a therapeutic data set.<span>&nbsp; </span>The control set records EEG data without utilizing musical feedback or breathing exercises.<span>&nbsp; </span>The therapeutic set records EEG data with the feedback and breathing exercises.</p>\n<p>&nbsp;</p>\n<p>Although this project is primarily concerned with changes in the alpha EEG brainwave frequency range, changes in other frequency ranges were used to trigger events in the feedback. 
This approach was adopted to ensure that a subject&rsquo;s loss of focus (and/or a drop in the PSD of alpha) would not negatively affect the generation of novel musical feedback, and with the help of consistent feedback, the subject would be able to return their focus and continue. Depending on the subject&rsquo;s state of relaxation (and the PSD of the other four EEG frequency ranges measured), the performance and phrasing of the musical feedback would change in such a way as to encourage greater focus.</p>\n<p>For the initial proof-of-concept trials, I tested myself and a small sampling of other subjects. Preliminary data shows that alpha readings were higher, on average, during the therapeutic phase.<span>&nbsp; </span>Also, a higher overall peak value was achieved during the therapeutic phase. This suggests that this feedback model is an effective way of increasing activity in the alpha brainwave frequency range, which is the beneficial physiological and psychological effect I was hoping to find, although much more data needs to be collected before any definitive conclusions can be drawn. At this point, the system has been tested and is functional, and further research can begin. The modular design of the work allows for almost any variable to be included or excluded, which will be necessary moving forward with the research, in order to more thoroughly test the foundational elements of the thesis, as well as any musicological exploration and analysis that defining the feedback raises.<span>&nbsp; </span><br><br>In the meantime, I am already using the software as a compositional system to create recorded works and live soundtracks. 
I am also planning to mount the project as an interactive installation in a gallery setting.</p>\n<p>&nbsp;</p>\n<p>&nbsp;</p>\n<p><strong>Contact Details:</strong></p>\n<p>&nbsp;</p>\n<p>Johnny Tomasiello<br><br><a href=\"mailto:johnnytomasiello@gmail.com\">johnnytomasiello@gmail.com</a><br><br></p>\n<p>&nbsp;</p>\n<p><strong>Credits &amp; Acknowledgments:</strong></p>\n<p>IRCAM</p>\n<p>Cycling &rsquo;74</p>\n<p>Carol Parkinson, Executive Director of Harvestworks</p>\n<p>Melody Loveless, NYU &amp; Max certified trainer</p>\n<p>Dr. Paul M. Lehrer and Dr. Richard Carr</p>\n<p>InteraXon Muse electroencephalography headband<span>&nbsp;</span></p>\n<p>James Clutterbuck (Mind Monitor developer)</p>\n<p>&nbsp;</p>\n<p><strong>References:</strong></p>\n<p>&nbsp;</p>\n<p><strong>[1] &ldquo;Mental Emotional Sentiment Classification with an EEG-based Brain-Machine Interface.&rdquo;<span>&nbsp;</span></strong></p>\n<p>Bird, Jordan J.; Ekart, Aniko; Buckingham, Christopher D.; Faria, Diego R., 2019</p>\n<p>&nbsp;</p>\n<p><strong>[2] &ldquo;Effects of mental state on heart rate and blood pressure variability in men and women.&rdquo;<span>&nbsp;</span></strong></p>\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Madden+K&amp;cauthor_id=8590551\">K Madden</a>&nbsp;,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Savard+GK&amp;cauthor_id=8590551\">G K Savard</a>, 1995</p>\n<p>&nbsp;</p>\n<p>&nbsp;</p>\n<p><strong>[3] &ldquo;How Can Music Influence the Autonomic Nervous System Response in Patients with Severe Disorder of Consciousness?&rdquo;<span>&nbsp;</span></strong></p>\n<p>Francesco Riganello,* Maria D. 
Cortese, Francesco Arcuri, Maria Quintieri, and Giuliano Dolce, 2015</p>\n<p>&nbsp;</p>\n<p><strong>[4] Neurofeedback: A Comprehensive Review on System Design, Methodology and Clinical Applications</strong></p>\n<p><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/?term=Marzbani%20H%5BAuthor%5D&amp;cauthor=true&amp;cauthor_uid=27303609\"><strong>Hengameh Marzbani</strong></a><strong>, </strong><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/?term=Marateb%20HR%5BAuthor%5D&amp;cauthor=true&amp;cauthor_uid=27303609\"><strong>Hamid Reza Marateb</strong></a><strong>,</strong> <strong>and </strong><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/?term=Mansourian%20M%5BAuthor%5D&amp;cauthor=true&amp;cauthor_uid=27303609\"><strong>Marjan Mansourian</strong></a><strong>,</strong><strong> 2016</strong></p>\n<p>&nbsp;</p>\n<p><strong>[5] Stress Management Techniques: Are They All Equivalent, or Do They Have Specific Effects?</strong></p>\n<p>Paul M. Lehrer and Richard Carr, 1994</p>\n<p>&nbsp;</p>\n<p><strong>[6] Alpha activity and cardiac correlates: three types of relationships during nocturnal sleep</strong></p>\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Ehrhart+J&amp;cauthor_id=10802467\">J Ehrhart</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Toussaint+M&amp;cauthor_id=10802467\">M Toussaint</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Simon+C&amp;cauthor_id=10802467\">C Simon</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Gronfier+C&amp;cauthor_id=10802467\">C Gronfier</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Luthringer+R&amp;cauthor_id=10802467\">R Luthringer</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Brandenberger+G&amp;cauthor_id=10802467\">G Brandenberger</a>, 2000</p>\n<p>&nbsp;</p>\n<p><strong>[7] &ldquo;A Composer's Confessions\"<span>&nbsp;</span></strong></p>\n<p>John Cage, 1948<span>&nbsp;</span></p>\n<p>&nbsp;</p>\n<p><strong>[8] Brainwaves in concert: the 20th century sonification of the 
electroencephalogram<br></strong>Bart Lutters, Peter J. Koehler, 2016<span>&nbsp;</span></p>\n<p>&nbsp;</p>\n<p><strong>[9] The Berger Rhythm: Potential Changes From The Occipital Lobes in Man<span>&nbsp;</span></strong></p>\n<p>Adrian, Matthews, 1934</p>\n<p>&nbsp;</p>\n<p><strong>[10] How To Interpret an EEG and its Report</strong></p>\n<p>Marie Atkinson, MD, 2010</p>\n<p>&nbsp;</p>\n<p><strong>[11] Brain-Computer Music Interfacing: Interdisciplinary Research at the Crossroads of Music, Science and Biomedical Engineering<br></strong>Miranda, E. R., 2014</p>\n<p>&nbsp;</p>\n<p><strong>[12] Relaxation and Music Therapies for Asthma Among Patients Prestabilized on Asthma Medication</strong></p>\n<p>Paul Lehrer, et al., 1994</p>",
        "topics": [],
        "user": {
            "pk": 18362,
            "forum_user": {
                "id": 18355,
                "user": 18362,
                "first_name": "Johnny",
                "last_name": "Tomasiello",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/4b62dafc53dcbf42b1b50f617668de0a?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-02-13T13:18:35.802851+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Johnny_Tomasiello",
            "first_name": "Johnny",
            "last_name": "Tomasiello",
            "bookmarks": []
        },
        "slug": "moving-towards-synchrony-1",
        "pk": 1131,
        "published": false,
        "publish_date": "2022-03-21T12:53:05.048775+01:00"
    },
    {
        "title": "Garden of Sensory Delights",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris",
        "content": "<p><span>&lsquo;Pairi-daēza&rsquo; means &lsquo;walled garden&rsquo; in Persian, and is the origin of the Hebrew word for orchard, &lsquo;pardes&rsquo;, and also the origin of the English word for Paradise. Pairi-daēza is also related to &lsquo;parigauda&rsquo;, meaning a screen, which is the root of the Hebrew, &lsquo;pargod&rsquo; - a key concept in Jewish mysticism that describes our veiled relationship to immanence. Pargod is&nbsp; also related to &lsquo;parochet&rsquo;- the curtain veil in the Jerusalem temple&rsquo;s holy of holies that only the high priest could enter once a year on Yom Kippur to offer atonement sacrifices of blood and incense.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>This installation brings a mystical paradise to life through immersion of the audience. Inspired by Hieronymus Bosch&rsquo;s uncanny </span><em><span>Garden of Earthly Delights</span></em><span>, a triptych of Eden screens and veils encloses a garden, into which the audience enters. As they do so, they trigger projections that bathe the audience in holographic flowers, plants and fruit trees. Continuing to explore, they are immersed in lush sounds, such as irrigating water and pollinating insects. Wafts of fresh, aromatic fragrances create a mystical veil while hallucinatory voices lure them to cross the threshold- but are they first ready to confront what lurks in their shadow?</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>This installation explores the potential of holography as an augmented reality that allows people to be immersed in physical and virtual worlds simultaneously while feeling equally present in both. Holography is a powerful medium to represent yearnings beyond material presence to experience something more immaterial.&nbsp; </span></p>",
        "topics": [
            {
                "id": 1194,
                "name": "augmented reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1193,
                "name": "  holography",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 303,
                "name": "Immersion",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1195,
                "name": " installation ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32967,
            "forum_user": {
                "id": 32919,
                "user": 32967,
                "first_name": "Alexandra",
                "last_name": "Topaz",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f76b5410cca72692cd37e25ebf487ce4?s=120&d=retro",
                "biography": "Alexandra is an exhibition designer, curator, and educator from Jerusalem, currently based in London. Her practice is situated at the intersection of spatial design, exhibitions, 3D modelling and moving image in various experimental formats and research methods. \n\nHer current research explores the period room as a display strategy within museums, and architecture's role in generating narratives. Alexandra most recently exhibited 'A Room With A View' installation for the LG OLED exhibit Luminous in London, and 'Shema(nis) - what lurks in culture's shadow?' audio-visual performance at IKLECTIK, London.\n\nPreviously, Alexandra worked as an exhibition designer at the Israel Museum Jerusalem, Israel; visiting lecturer at the Architectural Department at Bezalel Academy of Arts and Design, Jerusalem; and independent curator and exhibition designer of various exhibitions. \n\nAlexandra received her B.Arch (Cum Laude) from Bezalel Academy of Arts and Design in Jerusalem (2016). In 2021, she was selected as a recipient of the Clore-Bezalel Scholarship for her studies at the Royal College of Art.",
                "date_modified": "2023-02-28T21:15:58+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "alexandra",
            "first_name": "Alexandra",
            "last_name": "Topaz",
            "bookmarks": []
        },
        "slug": "garden-of-sensory-delights",
        "pk": 2101,
        "published": true,
        "publish_date": "2023-02-28T22:05:10+01:00"
    },
    {
        "title": "Festival Wandering Music, édition 2024, au C-LAB",
        "description": "Présentation des lauréats du Prix ISAC 2024 pour l'ambisonic au festival Wandering Music au C-LAB (à Taipei, Taiwan), le 30 août à 17h, 19h et 20h (heure locale).",
        "content": "<h3><a href=\"https://clab.org.tw/events/2024_wandering/\">Festival&nbsp;Wandering Music, &eacute;dition 2024,&nbsp;au C-LAB</a></h3>\r\n<p>-</p>\r\n<p><em><span>Ambisonic Shocks &amp; Strata of Senses&nbsp;</span></em>: travaux laur&eacute;ats du Prix ISAC 2024</p>\r\n<p><strong>Paolo Montella, Andrea Laudante, Giuseppe Pisano (1er prix)</strong>, Ralph Killhertz (2e prix), Natasha Barrett (3e prix)</p>\r\n<p>Spatial Audio Field, C-Lab Taiwan Sound Lab</p>\r\n<p>-</p>\r\n<p>Vendredi 30 ao&ucirc;t -&nbsp;17:00, 19:00, 20:00</p>\r\n<p>Dur&eacute;e : 30 minutes</p>\r\n<p>-</p>\r\n<p>ISAC 2024 (International Sonosfera Ambisonics Competition) se concentre sur les nouvelles expressions artistiques dans divers domaines de la production musicale, gr&acirc;ce &agrave; la technologie avanc&eacute;e de l'ambisonie. Les &oelig;uvres gagnantes ont &eacute;t&eacute; pr&eacute;sent&eacute;es dans deux lieux de renomm&eacute;e mondiale pour l'&eacute;coute acousmatique High-Order Ambisonics : &agrave; l'Espace de Projection de l'Institut de Recherche et Coordination Acoustique/Musique (IRCAM) et Sonosfera&reg;, un amphith&eacute;&acirc;tre technologique mobile con&ccedil;u et r&eacute;alis&eacute; par David Monacchi. 
Le festival Wandering, en 2024, pr&eacute;sente sp&eacute;cialement ces &oelig;uvres prim&eacute;es ; le public fera l'exp&eacute;rience d'un voyage &eacute;tonnant bas&eacute; sur la perception audio de leurs sons multicouches aux styles divers et aux structures complexes.</p>\r\n<p>-</p>\r\n<h4><em>Non &egrave; un compendio di etologia numerico-digitale</em></h4>\r\n<h4>(Il ne s'agit pas d'un recueil d'&eacute;thologie num&eacute;rique)</h4>\r\n<h4>Paolo Montella, Andrea Laudante, Giuseppe Pisano (1er prix)</h4>\r\n<p>Les mots du jury : \"Un voyage immersif au c&oelig;ur de paysages imaginaires, o&ugrave; des cr&eacute;atures num&eacute;riques ressemblant &agrave; des b&ecirc;tes errent au milieu de volcans plasmatiques, de d&eacute;serts tranchants et de lagons diaphanes. Ce qui distingue cette composition, ce n'est pas seulement sa richesse sonore, mais aussi l'esprit collectif qui a pr&eacute;sid&eacute; &agrave; sa cr&eacute;ation. Montella, Laudante et Pisano ont fait voler en &eacute;clats la notion de composition solitaire, optant plut&ocirc;t pour une approche collaborative\".<br />-</p>\r\n<h4><em>Transformations: Music for a Destinationless Journey</em></h4>\r\n<h4>(Transformations : Musique pour un voyage sans destination)</h4>\r\n<h4>Ralph Killhertz (2e prix)</h4>\r\n<p><span>Les mots du jury</span> : \"Transformations nous invite &agrave; un voyage audacieux &agrave; travers des territoires sonores inexplor&eacute;s, o&ugrave; les voix et les gongs servent de catalyseurs &agrave; des m&eacute;tamorphoses &eacute;nerg&eacute;tiques au sein de la psych&eacute; humaine. 
Gr&acirc;ce &agrave; une exploration magistrale de l'essence primitive du son, Killhertz cr&eacute;e une tapisserie envo&ucirc;tante de textures timbrales et rythmiques, transcendant la narration musicale conventionnelle.\"<br />-</p>\r\n<h4><em>Incredible Moments from Venice: The Other Side of the Lagoon</em></h4>\r\n<h4>(Incroyables moments de Venise : L'autre c&ocirc;t&eacute; de la lagune)</h4>\r\n<h4>Natasha Barrett (3e prix)</h4>\r\n<p>Les mots du jury : \"La composition de Natasha Barrett incarne l'essence de l'exploration artistique et de la cr&eacute;ativit&eacute;. Dans Impossible Moments from Venice, elle capture magistralement l'allure &eacute;nigmatique de l'une des villes les plus embl&eacute;matiques du monde, Venise, et transcende sa r&eacute;alit&eacute; tangible en un royaume d'imagination auditive.\"</p>\r\n<p>-</p>\r\n<p>Remerciements &agrave;|<strong>Paolo Montella, Andrea Laudante, Giuseppe Pisano</strong>, Ralph Killhertz, Natasha Barrett</p>\r\n<p>-<br />Co-organisateur|Institut de recherche et coordination acoustique/musique (IRCAM), Pesaro Ville cr&eacute;ative de la musique de l'UNESCO</p>\r\n<p>-</p>\r\n<p><img src=\"/media/uploads/sonosfera_(pesaro,_italy)_from_fragments_of_extinction_by_david_monacchi_(photo_alex_d'emilia).jpg\" width=\"750\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Sonosfera&reg; (Pesaro, Italie) : extrait de Fragments of Extinction de David Monacchi<br />Photo d'Alex d'Emilia</p>",
        "topics": [],
        "user": {
            "pk": 50102,
            "forum_user": {
                "id": 50042,
                "user": 50102,
                "first_name": "Adele",
                "last_name": "Dessard",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_3683.JPG",
                "avatar_url": "/media/cache/94/16/9416b5834c723d00ccbfdc4e0d77a3b0.jpg",
                "biography": "Adèle Dessard is the Events and Sales Coordinater of Forum IRCAM, working with Paola Palumbo, Greg Beller, Guillaume Piccarreta and Hugues Vinet.\nThe Forum IRCAM is the community of users of IRCAM softwares that include the platform forum.ircam.fr and Forum Workshops where artists and scientists of all around the world reunite yearly.\n\nShe received a Master's degree in Applied Economics specialized in Cultural and Digital Economics at Université Paris 1 Panthéon Sorbonne.\nAfter working in the Movie Industry, specifically in documentary distribution (communication and partnership manager, and distribution assistant), she engaged in the Live Industry working for Theatre using spatialized and immersive sound (production, sound designer and stage assistant). \nShe also participated in the Operating of multiple music festivals (Les Nuits du Botaniques, Bruxelles ; We Love Green, Paris ; Les Transmusicales, Rennes ; MaMa Music Convention, Paris ; DreamNation Festival, Paris ; Les Z'Eclectiques, Angers ; Le Bel Air Festival, Toulouse ; Peacock Society, Paris ; etc.) and of the Boula Pop association organizing the Sofar Sounds concerts in Paris.",
                "date_modified": "2024-10-11T12:29:26.260165+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 390,
                        "forum_user": 50042,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 476,
                                "membership": 390
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "dessard",
            "first_name": "Adele",
            "last_name": "Dessard",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 123,
                    "user": 50102,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 124,
                    "user": 50102,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 117,
                    "user": 50102,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "festival-wandering-music-edition-2024-au-c-lab",
        "pk": 2952,
        "published": true,
        "publish_date": "2024-02-08T18:30:29+01:00"
    },
    {
        "title": "Sonic world building: \"Embracing, Unravelling and Smashing the Fantasy\" by Juice",
        "description": "Embracing, Unravelling, and Smashing the Fantasy (2025) by Juice is a concert and a sound art performance employing DIY acoustic and electroacoustic instruments, combined with structured and improvised live cello by Cellist Santi Lowe and contemporary dance by dancer&movement artist SIQI CHEN. The work reimagines nine Western fairytales and Eastern folktales through an audio-led experience that challenges nostalgic interpretations and reconsiders the cultural narratives embedded in childhood stories.",
        "content": "<h5 id=\"➡️-this-presentation-is-part-of-ircam-forum-workshops-paris-engh\"><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 202</a></strong></h5>\r\n<p>Drawing from <em>The Deer of Nine Colours, Hua Mulan, The Magic Flute, The Nutcracker, The Little Mermaid, The Red Shoes, Pinocchio,</em> and <em>The Nightingale and the Rose</em>, the performance explores compassion, valour, enlightenment, imagination, sacrifice, obsession, curiosity, transformation, and devotion as fundamental aspects of human nature. Together, these tales form a layered sonic landscape that reflects cultural diversity, moral tension, and philosophical inquiry, grounded in their original contexts while resonating with contemporary experience.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/55d079d86e9672e4c63fdf712548512f.jpg\" /></p>\r\n<p>My project employed a self-developed method termed <strong>sonic world-building</strong>, realised through an audio-led performance.</p>\r\n<p>The definition of <em>sonic world-building</em> is informed by sonic fiction, particularly Goodman (2010) and Schulze (2020), who position it as an &ldquo;element of affective culture&rdquo; that offers both guidance and provocation. By exploring sound as a form of liberation and as a speculative narrative device, the project examines how sonic world-building functions as a dramaturgical tool in contemporary theatre and how it facilitates collective performance with the audience.</p>\r\n<p>Both diagrams were presented during the performance.</p>\r\n<h3>Sonic World-Building | A Dramaturgical Tool</h3>\r\n<p>The creation of coherent worlds with distinct histories, geographies, and cultures is central to speculative storytelling (Stableford, 2004). 
Game designer Jia Ren Liao emphasises that constructing narrative through sound&mdash;where each sound effect and musical element aligns with narrative intent&mdash;constitutes sonic world-building. My approach maps sonic elements across six dimensions within the pre-existing worlds of the selected fairytales and folktales (science, society, politics, philosophy, environment, and economy), where prefabricated, present, and unpredictable sounds interact to shape the performance world.</p>\r\n<h3>Sonic World-Building | Collective Performance and Critical Listening</h3>\r\n<p>Performance art has a profound capacity to connect the performer to the present moment (Schechner, 2003). Sound artist Kate Carr argues that sonic world-building reveals how movement, interaction, and materiality influence the ways in which worlds are constructed and dismantled. Tom Tlalim suggests that critical listening resists dominant sonic meanings, enabling more reflective, collective, and open-ended forms of world-building. Informed by these perspectives, as well as <em>The Map of Sonic Creativity</em> (Knight-Hill &amp; Margetson, 2023), I developed a participatory map to guide my practice.</p>\r\n<p>This project was created primarily using Pro Tools and Reaper, together with ASAP and SPAT for spatial audio.</p>\r\n<p>Credits:</p>\r\n<p><strong>Juice </strong>is a sound artist and freelance composer whose work also&nbsp;spans classical painting and illustration, three-dimensional contemporary sculpture and film, and reaches into a fourth dimension characterised by spatial sound design and digital media manipulation. Her recent sound-based work delves into the equilibrium and collision of pleasurable, disorganised, and structured sounds and noises within compositions. 
<br /><a href=\"http://www.juiceportfolio.com/\">http://www.juiceportfolio.com</a></p>\r\n<p><strong>Santiago Lowe&nbsp;</strong>is a member of Lowe Ensemble and has also worked as the principal cellist of the Early Music group Ars Combinatoria (Galicia, Spain) with whom he recorded J. S. Bach's Passion according to St. John and new compositions by contemporary composers. Santiago has been awarded a Leverhulme Trust Arts Scholarship. <br /><a href=\"https://www.loweensemble.com/santiago-lowe\">https://www.loweensemble.com/santiago-lowe</a></p>\r\n<p><strong>Siqi Chen</strong> is a dancer and choreographer trained in Chinese classical dance, folk dance, modern dance and ballet. She has performed across China and the UK, collaborating on diverse projects that blend art forms.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b188e4ba5ab9e9471333b7fc859b9782.jpg\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/65c872bfb24be71bf1e935fe99459b11.jpg\" /></p>",
        "topics": [
            {
                "id": 4141,
                "name": "#immersive theatre",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 404,
                "name": "#improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4142,
                "name": "#istrument design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1867,
                "name": "storytelling",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 154034,
            "forum_user": {
                "id": 153810,
                "user": 154034,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/10801080_Juice.jpg",
                "avatar_url": "/media/cache/10/a6/10a6647888a0a48e042f425d63dd7961.jpg",
                "biography": "Juice is a sound artist and freelance composer who also works across painting, sculpture, film, and performance. Her practice explores the equilibrium and collision between structured and chaotic sound, creating immersive sonicworlds that question time, perception, and history. Her commissioned works—including Space Concert, Archive of Fragility, and Alice—have been presented at the Science Museum, Outernet, and the Barbican Centre. She has performed at Café OTO, West Bund Museum, and the Horniman Museum and Gardens.",
                "date_modified": "2026-02-11T23:31:55.872346+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "juice",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sonic-world-building-embracing-unravelling-and-smashing-the-fantasy-by-juice",
        "pk": 4298,
        "published": true,
        "publish_date": "2026-02-02T14:49:54+01:00"
    },
    {
        "title": "The Birdfansyer's Delight or Choice — Idiomatic Quantisation — CAC-project by Dagfinn Koch",
        "description": "My project, in the long run, is about mapping the span from traditional to non-traditional (extended techniques) playing techniques for clarinet, guitar, glass harmonica and viola. I wish to develop a library (a plugin) for the computer-assisted composition software Open Music from IRCAM. Using the library, one can assess the playability of musical material. The library can also suggest idiomatic (quantisation) solutions and solutions that challenge given conventions. The time dimension is very important. How long does it take to move from one tone to the next, from one chord to the next, or from one technique to another must be mapped. One can imagine this expressed as percentages. For example, the library may be asked to produce results in which 80 per cent of the musical material falls within traditional idiomatic practice.\r\n\r\nApple Keynote presentation.",
        "content": "<p><strong>Context</strong></p>\r\n<p>In the pursuit of new ways of expression, composers and instrument makers have, over the centuries, explored untraditional playing techniques to produce original sound combinations. The symbolic notation has been central to the development of Western art music. It provides the possibility to record existing musical sequences, construct musical material in contact with the paper and, in particular, serve as instructions for performers to reproduce music. The reproduction is physical, and in the pursuit of &ldquo;nie erh&ouml;rte Kl&auml;nge&rdquo; (Sch&ouml;nberg), the idea the composer has had has been difficult to produce. Composers have blamed unwilling musicians, while musicians have blamed incompetent composers. I am a viola player myself and have a long experience as a chamber and orchestral musician, primarily at a semi-professional level. I do not know of many composers who are competent instrumentalists.</p>\r\n<p>I would like to map traditional and less traditional techniques to test a given note material against playability using artificial intelligence. Elvio Cipollone has touched upon something somewhat similar in his work with OM-Virtuoso, as described in an article in The OM Composer&rsquo;s Book 2. But he hasn&rsquo;t further developed the idea, and it was limited to one work and one instrument, the clarinet. Apart from this, as far as I know, there hasn&rsquo;t been much research done in this area. The research group at IRCAM Centre Pompidou is interested in the idea, which I think confirms the lack of research. Symbolic notation and quantisation are key areas of their research in musical representation.<br /><br /><strong>Important&nbsp;detour</strong></p>\r\n<p>On the Baroque transverse flute with one key, one can achieve a consistent tone across the entire register only in the keys of G major and D major. I recorded Professor Hans Olav Gorset playing all 41 pitches without adjustments. 
All tones are recorded for both classification and mapping in a software sampler.</p>\r\n<p>I&rsquo;ve mapped the instruments' tonal possibilities and fingerings, using machine learning to create datasets for algorithms realised from scratch in Open Music, producing &ldquo;scales&rdquo; that result in a consistent tone, the opposite, or transformations between these. As it turned out, one can understand why Theobald B&ouml;hm (1794-1881) found it necessary to develop a system of keywork and fingering for the flute.</p>\r\n<p>Software: Open Music (IRCAM), Kontakt (Native Instruments) and Visual Studio Code (AI code editor).<span>&nbsp;</span></p>\r\n<p><strong>Artistic method/process</strong></p>\r\n<p>My project, in the long run, is about mapping the span from traditional to non-traditional (extended techniques) playing techniques for clarinet, guitar, glass harmonica and viola. I wish to develop a library (a plugin) for the computer-assisted composition software Open Music from IRCAM. Using the library, one can assess the playability of musical material. The library can also suggest idiomatic (quantisation) solutions and solutions that challenge given conventions. The time dimension is very important. How long it takes to move from one tone to the next, from one chord to the next, or from one technique to another must be mapped. One can imagine this expressed as percentages. For example, the library may be asked to produce results in which 80 per cent of the musical material falls within traditional idiomatic practice. In addition, the library can serve as a model for creating electroacoustic music by providing a virtual instrument based on the characteristics of an acoustic instrument. (The Diphone Studio software from IRCAM can do this at a micro level.)</p>\r\n<p>The development of the directory (library) will be one of the three pillars in my work. 
(The other two are a historical-philosophical reflection around the body, machine, and instrument, and the third of the mentioned seven short compositions.) I have been using computer-assisted compositional software for 28 years.</p>\r\n<p><strong>Results<span>&nbsp;</span></strong></p>\r\n<p>The project will lead to the development of a library for the Open Music software, to be distributed and maintained by IRCAM. I imagine a concert at the end of the program period, in both Oslo and Paris. As mentioned, the written reflection is intended to be a historical-philosophical reflection around the body, machine, and instrument. The project is open to articles and presentations under the patronage of the Norwegian Academy of Music and IRCAM. What I may arrive at could also serve as a starting point for discussions about the use of material, in particular, and the idea of form, in general, in composition.</p>\r\n<p>I want to show how important it is for composers to learn how instruments work. Indeed, how important it is to be a competent practitioner yourself in order to become a good composer. My impression is that younger composers, at least, have less understanding of this. They are used to having a computer as a user interface and don&rsquo;t get the real physical experience an instrument provides.</p>\r\n<p>On the other hand, I see that it may be possible to excite the younger generation about acoustic instruments through their interest in computers, even if it is too late for them to become capable instrumentalists. But one can contribute to a change of attitude, as computers are a perfect tool for working with formalised music. Hopefully, I will be able to enthuse those who are sceptical about computer-assisted composition by highlighting its benefits. 
A computer does not exclude the human aspect, as long as one is willing to understand that an instrumentalist isn&rsquo;t primarily interested in the work, but rather in what the notation, as physical instruction, can offer in expressive possibilities. Virtuosity doesn&rsquo;t have to mean mastering something &ldquo;unplayable&rdquo;. It may also mean the pleasure of playing something that lies well for the instrument (including extended techniques), which in turn contributes &ldquo;to the instrument revealing its spirit.&rdquo; (Bis das Instrument seinen Geist offenbart. Klaus K. H&uuml;bler)</p>\r\n<p>Translated from Norwegian by Malin Kjelsrud and Dagfinn Koch<br /><img alt=\"Project Image\" src=\"https://forum.ircam.fr/media/uploads/user/3a70937b5cb8aadd4cbd1bcc5860ddb1.jpeg\" /></p>",
        "topics": [
            {
                "id": 445,
                "name": "Accoustique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1906,
                "name": "Birdsongs",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 954,
                "name": "CAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 335,
                "name": "Instrumental",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 313,
                "name": "Machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 84,
            "forum_user": {
                "id": 84,
                "user": 84,
                "first_name": "Dagfinn",
                "last_name": "Koch",
                "avatar": "https://forum.ircam.fr/media/avatars/Call-ParisEnghien-Dagfinn-Koch-headshot1.JPG",
                "avatar_url": "/media/cache/7e/98/7e98ae1d5482ba4477ac7f1b13fb1805.jpg",
                "biography": "Dagfinn Koch (b. 1964) in Kristiansund, a city on the western coast of Norway, built on a cluster of islands. He attended the city music school, where he studied the viola and piano from 1972 to 1980. At the high school from 1980 to 1983, following the music program. He played the viola in the semi-professional Kristiansund Symphony Orchestra from 1979 to 83, and in different chamber ensembles during those years.\n\nIn 1983, he was accepted as a composition student at the Norwegian Academy of Music under Professor Lasse Thoresen until 1988. From 1991 to 1993, composition under Professor Dr. h.c Witold Szalonek at the Hochschule (since 2001 Universität) der Künste Berlin. \n\nHis growing reputation as a composer led to his acceptance as a member of the Norwegian Composers Society in 1991 and later, of NOPA (society for composers and authors of popular music) in 2017.\n\nHe has lived in Germany for a total of 11 years. Two of them are in Berlin, and nine are in Lübeck. \n\nHe’s a roman catholic and a Lay Dominican within the catholic Order of Preachers. The Norwegian Dominicans belong to the French province in Paris.   \n\nSigned by Norsk musikkforlag as an in-house composer in September 2022.",
                "date_modified": "2026-03-04T10:43:35.100462+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1269,
                        "forum_user": 84,
                        "date_start": "2025-12-19",
                        "date_end": "2026-12-19",
                        "type": 0,
                        "keys": [
                            {
                                "id": 969,
                                "membership": 1269
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "DagfinnKoch",
            "first_name": "Dagfinn",
            "last_name": "Koch",
            "bookmarks": []
        },
        "slug": "the-birdfansyers-delight-or-choice-idiomatic-quantisation-cac-project",
        "pk": 4451,
        "published": true,
        "publish_date": "2026-03-03T10:19:28+01:00"
    },
    {
        "title": "IRCAM Tutorials / Melodic Scale (Max For Live device)irca",
        "description": "\n\nGreg Beller, Product Manager @IRCAM Forum\n\n",
        "content": "<p><span class=\"style-scope yt-formatted-string\" dir=\"auto\">Download: </span><a class=\"yt-simple-endpoint style-scope yt-formatted-string\" dir=\"auto\" spellcheck=\"false\" href=\"https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbUIyOTlRN1huZHVHck5xR2NLYlJ1MXEyc1ctUXxBQ3Jtc0tuTkl5RWJfQTdLY2VIcWZYaXlEWnhXbG9MbEQ3X2lsdzJ6N2hLOVNhQ1l0b2FqbGlNRGJuWkpGX2FTZzNzQjBKc0FZSXp6dFdWNG5SYURPcEx6ajBNbmRwTXJ3OHpmSEtxU2JybXJhLXJvQ1BfZWdRYw&amp;q=https%3A%2F%2Fforum.ircam.fr%2Fprojects%2Fdetail%2Fmelodic-scale%2F\" target=\"_blank\" rel=\"nofollow noopener\">https://forum.ircam.fr/projects/detai...</a><span class=\"style-scope yt-formatted-string\" dir=\"auto\"> Subscribe to IRCAM Forum Premium Membership: </span><a class=\"yt-simple-endpoint style-scope yt-formatted-string\" dir=\"auto\" spellcheck=\"false\" href=\"https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbFBpUDRfYUtDX05lMTR3N1NqamRoRXdwYjk0d3xBQ3Jtc0trM1RHOUhFbExPQWhKLTFqYWp1d2JWcFZOalZMN0E5NkxpNzRRZUlBN1NycE9pMmFjcnVYUFVUUEU2cEVGSFliLTdRb2xIUVRqOG1CUGhHMFpfNWxVdWpWbVEwTUh1LVlCeWhzVThrRXdIRW5yWWNRcw&amp;q=https%3A%2F%2Fwww.ircam.fr%2Finnovations%2Fabonnements-du-forum%2F\" target=\"_blank\" rel=\"nofollow noopener\">https://www.ircam.fr/innovations/abon...</a><span class=\"style-scope yt-formatted-string\" dir=\"auto\"> Subscribe to the next webinar (14/12/2020): </span><a class=\"yt-simple-endpoint style-scope yt-formatted-string\" dir=\"auto\" spellcheck=\"false\" href=\"https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbTdfQTB0X2EtNXctMUJwdWRLZ24tdkdtZ1F6UXxBQ3Jtc0tsVlY3Z3ZwVU5xVlpBS3dfZlNFaldGSVJwOUdESTFWVEl4QkNWN2tDWkZSS0duV0lBT0otS3oyOU1ib21ZcFRqQ0tjZ0FQRVFnQ3AyQ0dFZFBlcE96U1UxSWlPaHQ1blVLbU1IX093eUdXaW5ybk9WTQ&amp;q=https%3A%2F%2Fforum.ircam.fr%2Fagenda%2Fwebinaire-melodic-scale-anime-par-greg-beller%2Fdetail%2F\" target=\"_blank\" rel=\"nofollow 
noopener\">https://forum.ircam.fr/agenda/webinai...</a><span class=\"style-scope yt-formatted-string\" dir=\"auto\"> Greg Beller, Product Manager @IRCAM Forum Melodic Scale is a Max For Live device that automatically modifies a melodic line in real time, by changing its scale, mode or temperament. In the studio or on stage, Melodic Scale allows singers to correct the pitch and change the vibrato of their voice, but also to sing in unusual modes and temperaments. Based on SuperVP technology, Melodic Scale transforms a melody without adding a vocoder effect and offers an alternative to autotune&trade;, for use in a wide variety of musical styles. </span><a class=\"yt-simple-endpoint style-scope yt-formatted-string\" dir=\"auto\" spellcheck=\"false\" href=\"https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbnlFbDhtQWhDbU5vcmtRQTBmeFg1ajV2a2pIZ3xBQ3Jtc0trV0xSa19ZQmU3MktTeTQybnhyWTJjNC1UcE9Jb3hEdGEwVXNRSWEzWEEyckVzZXlKeG1qcEhNbmlZSW80MHQ5cVoxNHF2aWdfUnRWRjcxa1R6eklURkZGbzZZOFFHWDlLOGtoWWI4SFNmVFhXZmJkdw&amp;q=https%3A%2F%2Fforum.ircam.fr%2F\" target=\"_blank\" rel=\"nofollow noopener\">https://forum.ircam.fr/</a> <a class=\"yt-simple-endpoint style-scope yt-formatted-string\" dir=\"auto\" spellcheck=\"false\" href=\"https://www.youtube.com/redirect?event=video_description&amp;redir_token=QUFFLUhqbVNMczRQN05SWnNNeFNQNzloLXpUMXQ3RGFRQXxBQ3Jtc0ttamxvQWhEMXdfd05zV1pDRVpvU3pnTlJDcHVETF9vbFpCSWxLdWswWGFKUWlkNzVEWklNZ05tT0hCTFltZElKaDVrM3pEM3A2cDYybUlHUXR6UkJaVjl3UElSektuQTVGaXpfSnR0aE9BTGlmak5JMA&amp;q=https%3A%2F%2Fwww.ircam.fr%2F\" target=\"_blank\" rel=\"nofollow noopener\">https://www.ircam.fr/</a></p>",
        "topics": [],
        "user": {
            "pk": 24441,
            "forum_user": {
                "id": 24414,
                "user": 24441,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/c5a5fda0f007b20ac42975ad4ae78a00?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nyalreal24",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ircam-tutorials-melodic-scale-max-for-live-deviceirca",
        "pk": 990,
        "published": false,
        "publish_date": "2021-09-29T16:38:54.540611+02:00"
    },
    {
        "title": "xx88xcncom",
        "description": "xx88xcncom",
        "content": "<p><a href=\"https://xx88x.cn.com/\">xx88</a> l&agrave; nền tảng giải tr&iacute; trực tuyến mang đến cho người d&ugrave;ng nhiều lựa chọn ph&ugrave; hợp với nhu cầu thư gi&atilde;n hiện đại. Khi truy cập, người d&ugrave;ng c&oacute; thể tiếp cận c&aacute;c nội dung như thể thao, tr&ograve; chơi trực tuyến, game b&agrave;i v&agrave; nhiều h&igrave;nh thức giải tr&iacute; phổ biến kh&aacute;c. Điểm đ&aacute;ng ch&uacute; &yacute; nằm ở c&aacute;ch bố tr&iacute; giao diện r&otilde; r&agrave;ng, gi&uacute;p thao t&aacute;c trở n&ecirc;n đơn giản v&agrave; dễ l&agrave;m quen ngay từ lần đầu sử dụng. Kh&ocirc;ng cần mất qu&aacute; nhiều thời gian t&igrave;m hiểu, người d&ugrave;ng vẫn c&oacute; thể nhanh ch&oacute;ng nắm bắt c&aacute;ch hoạt động v&agrave; lựa chọn nội dung ph&ugrave; hợp với sở th&iacute;ch c&aacute; nh&acirc;n. B&ecirc;n cạnh đ&oacute;, hệ thống được tối ưu để đảm bảo tốc độ truy cập ổn định, hạn chế t&igrave;nh trạng gi&aacute;n đoạn trong qu&aacute; tr&igrave;nh trải nghiệm. Điều n&agrave;y g&oacute;p phần tạo cảm gi&aacute;c liền mạch v&agrave; thoải m&aacute;i khi sử dụng trong thời gian d&agrave;i. Nền tảng cũng hỗ trợ hoạt động tr&ecirc;n nhiều thiết bị kh&aacute;c nhau, từ điện thoại đến m&aacute;y t&iacute;nh, gi&uacute;p người d&ugrave;ng linh hoạt hơn trong việc truy cập bất cứ l&uacute;c n&agrave;o. Một yếu tố quan trọng kh&aacute;c l&agrave; vấn đề bảo mật, khi c&aacute;c th&ocirc;ng tin cơ bản đều được ch&uacute; trọng bảo vệ nhằm mang lại sự an t&acirc;m trong qu&aacute; tr&igrave;nh sử dụng. Ngo&agrave;i ra, c&aacute;c chức năng li&ecirc;n quan đến giao dịch được thiết kế theo hướng đơn giản v&agrave; dễ thao t&aacute;c, gi&uacute;p người d&ugrave;ng tiết kiệm thời gian. Dịch vụ hỗ trợ cũng đ&oacute;ng vai tr&ograve; quan trọng trong việc n&acirc;ng cao trải nghiệm, với khả năng phản hồi nhanh v&agrave; hỗ trợ khi cần thiết. 
Nội dung tr&ecirc;n nền tảng được cập nhật theo xu hướng, gi&uacute;p duy tr&igrave; sự mới mẻ v&agrave; tr&aacute;nh cảm gi&aacute;c lặp lại. Tuy vậy, người d&ugrave;ng vẫn n&ecirc;n chủ động t&igrave;m hiểu th&ocirc;ng tin trước khi tham gia, đồng thời c&acirc;n đối thời gian hợp l&yacute; để đảm bảo trải nghiệm t&iacute;ch cực. Trong bối cảnh c&aacute;c dịch vụ trực tuyến ng&agrave;y c&agrave;ng ph&aacute;t triển, việc lựa chọn một nền tảng c&oacute; trải nghiệm ổn định, dễ sử dụng v&agrave; ph&ugrave; hợp với nhu cầu c&aacute; nh&acirc;n sẽ gi&uacute;p n&acirc;ng cao chất lượng giải tr&iacute; h&agrave;ng ng&agrave;y. Với những yếu tố đ&oacute;, nền tảng n&agrave;y đang dần tạo được sự ch&uacute; &yacute; v&agrave; trở th&agrave;nh một lựa chọn ph&ugrave; hợp đối với nhiều người d&ugrave;ng hiện nay.</p>",
        "topics": [
            {
                "id": 320,
                "name": "Hacker",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166556,
            "forum_user": {
                "id": 166319,
                "user": 166556,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/4253767d848a453a74b300e5cc5e2383?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-04T06:46:39.078345+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "xx88xcncom",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "xx88xcncom",
        "pk": 4590,
        "published": false,
        "publish_date": "2026-04-04T06:48:58.050474+02:00"
    },
    {
        "title": "Methods for Procedural Design of Spatial Reverb by John Burnett and Benoit Alary",
        "description": "",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/1.png\" alt=\"\" width=\"640\" height=\"387\" />&nbsp;<img src=\"/media/uploads/2.png\" alt=\"\" width=\"640\" height=\"387\" /></div>\r\n<div class=\"c-content__button\">Presented by John Burnett, Benoit Alary</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/burnett/\" target=\"_blank\">Biography of John Burnett</a></div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/balary/\" target=\"_blank\">Biography of Benoit Alary</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p>For the 2025 IRCAM Forum, we are proposing both a lecture and a demo session which will explore procedural design methods for spatial reverberation in artistic creation. The presentation will center around two recent technologies from the IRCAM EAC group: the Elliptique reverb and the Elliptique Viewer, a new visualization and control interface. Using these tools, the presentation will establish a language for working with and sculpting spatial reverberation and examine the application of these ideas in the context of multichannel audio and ambisonic spatialization. The main focus of this presentation is on visualization as well as various means of procedurally generating reverberation profiles using the Elliptique Viewer. One such method is via a room model with various real-time parameters such as room dimensions and wall absorption. 
The viewer also allows for more abstract methods of generation, such as using 3D spatial noise to distribute decay profiles around the listener. Special attention will also be paid to means of transforming the sound field over time, giving the impression of a room with shifting dimensions or reverberant spaces with impossible geometries and acoustic properties. We intend to accompany the lecture with a demonstration of various design approaches and procedural generation methods within a multichannel speaker array.</p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/headshot.jpeg\" alt=\"\" width=\"480\" height=\"480\" /></p>",
        "topics": [],
        "user": {
            "pk": 88046,
            "forum_user": {
                "id": 87942,
                "user": 88046,
                "first_name": "John",
                "last_name": "Burnett",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/0c2fa9b72424d2b561e7e1a332be2fe3?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-05-16T00:07:15.164014+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 966,
                        "forum_user": 87942,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "burnett",
            "first_name": "John",
            "last_name": "Burnett",
            "bookmarks": []
        },
        "slug": "methods-for-procedural-design-of-spatial-reverb-by-john-burnett-and-benoit-alary",
        "pk": 3343,
        "published": true,
        "publish_date": "2025-03-10T12:33:08+01:00"
    },
    {
        "title": "rhythm changes",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>Gunnar Geisse<br /><strong>rhythm changes </strong>(2020, 27&rsquo;43'')<br /><em>Sonata for solo laptop guitar</em><br />Digital pre-structured improvisation for electric guitar and interactive computer setup, post-edited montage of the recorded improvisations for the present fixed media version.</p>\r\n<p><br /><span style=\"text-decoration: underline;\">Quotations</span><br /><sup>1</sup> <em>Anthropology</em>, Charlie Parker<br /><sup>2</sup> &bdquo;Was ich an meinen Bildern mag, ist, dass es nur so aussieht, als seien sie real. [&hellip;] Man sp&uuml;rt, dass das, was man sieht, nicht deckungsgleich ist mit der Wahrheit. Nicht vollkommen.&ldquo;<br />Jeff Wall, <em>S&uuml;ddeutsche Zeitung,</em> interview with Holger Liebs (2003, May 24/25)<br /><sup>3</sup> &bdquo;The tune starts like this, and then it goes wherever it goes, but it's about [how that one first phrase is developed.] &hellip; What's the sense of what's coming next? [Like to me that, you know, the greatest solos, like] you don't see it, you don't hear it coming, but then once it's there it's like &hellip;&ldquo;<br />UNO Jazz Studies Program: November 5, 2014 - Peter Bernstein (2020, January 21), [YouTube] <a href=\"https://www.youtube.com/watch?v=QOucyHguEGE\">https://www.youtube.com/watch?v=QOucyHguEGE</a>, 14:31-14:35 and 27:02-27:12.</p>\r\n<p><br />Live version:<br /><a href=\"https://youtu.be/MUj4wa7cHyU?t=1980\">https://youtu.be/MUj4wa7cHyU?t=1980</a><br />or<br /><a href=\"https://drive.google.com/file/d/15Pt10V8fcXJLHeefMR0iJFvor4T4sZuX/view?usp=sharing\">https://drive.google.com/file/d/15Pt10V8fcXJLHeefMR0iJFvor4T4sZuX/view?usp=sharing</a></p>\r\n<p>&nbsp;</p>\r\n<p><em>THANK YOU ALL, Gunnar</em></p>",
        "topics": [
            {
                "id": 893,
                "name": "laptop guitar",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 31410,
            "forum_user": {
                "id": 31362,
                "user": 31410,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Gunnar_Geisse_thn3.jpeg",
                "avatar_url": "/media/cache/e2/d2/e2d296c770e78922d08de3e0d208d034.jpg",
                "biography": "I developed an instrument that I call \"laptop guitar\", an extension of my former principal instrument, the electric guitar with the computer. This enables me to continue analogue playing on the digital level. Alongside signal processing I especially use – based on the spectral characteristics of the original signal – the software-supported realtime conversion of audio into MIDI data in order to control virtual instruments and samplers. It is irrelevant which type of audio signal serves as a source: it can be the electric guitar, or speech or noise; it is even feasible to “translate” music into other music in this manner. \n  Today I'm using my laptop guitar both as an improvisatory instrument and as a production tool. My work as a composer-performer is documented on around 40 CDs and in 30 radio plays. I just finished a piece about Arnold Schoenberg for the Villa Aurora in Los Angeles and currently working on an electronic transformation of the Debussy and a Bartók string quartet for the Munich Philharmonic. Besides my work with symphony orchestras, I love to play improvised, electronic, and experimental music, solo and in collaborations with colleagues all around the world.",
                "date_modified": "2026-02-27T10:42:29.632741+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "gunnar",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "rhythm-changes",
        "pk": 1295,
        "published": true,
        "publish_date": "2022-09-05T15:11:44+02:00"
    },
    {
        "title": "OpenMusic 7.2 released",
        "description": "OpenMusic 7.2 released\r\nRELEASE NOTES 7.2",
        "content": "<p>OpenMusic 7.2 released</p>\r\n<p></p>\r\n<p>OPERATING SYSTEMS</p>\r\n<p>MacOS: 64bits - ARM and Intel processors,</p>\r\n<p>WINDOWS: 32 bits</p>\r\n<p>LINUX: 64 bits RPM and DEB packages, tar-ball</p>\r\n<p>&nbsp;</p>\r\n<p>* RELEASE NOTES 7.2</p>\r\n<p>NEW FEATURES</p>\r\n<p>- fluidsynth player<br />- ascii-&gt;string, string-&gt;ascii<br />- Svg export of n-cercle objects</p>\r\n<p>IMPROVEMENTS</p>\r\n<p>- Markers info window offset settings<br />- Selection in POLY of VOICES<br />- SOUND objects have score-actions and plays in scorepatches<br />- tab completion (J. Jakes-Schauer)</p>\r\n<p>FIXES</p>\r\n<p>- fixed objfromobjs multi-seq-&gt;poly ports are preserved<br />- Now tempobj has the correct and same offset as the marker attached to<br />- score-actions now are saved in scorepatch<br />- Dynamic tempo now works.<br />- graphicports array-class (cocoa fix)<br />- graphicports maquette lock (cocoa fix)<br />- Now \"Save as\" works in the textfile inteface</p>\r\n<p></p>\r\n<p>Here is a howto for fluid and OM:</p>\r\n<p>https://openmusic-project.github.io/openmusic/doc/fluid</p>\r\n<p></p>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -56px; top: 311px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>",
        "topics": [
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 386,
                "name": "Composition strategies",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 224,
                "name": "Computer-aided composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 253,
                "name": "Composition Assistée par Ordinateur",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 955,
                "name": "Computer Assisted Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 311,
                "name": "Om",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 14,
            "forum_user": {
                "id": 14,
                "user": 14,
                "first_name": "Karim",
                "last_name": "Haddad",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1f556229c0742ef0586dd43d312f81a4?s=120&d=retro",
                "biography": "Karim Haddad was born in 1962 in Beirut Lebanon. He studied at the national conservatory there until it closed its doors in 1975 due to the civil war. He then went on to study philosophy and literature. Haddad received six awards from the CNSMD de Paris in addition to the Diplôme Supérieur de Composition with honors. He has worked with composers such as A.Bancquart, P. Mefano, K. Huber, and Emmanuel Nunes. This learning period is marked by his keen interest for non-tempered spaces and their strong relationship with temporal poetry. In 1992 and 1994 he took part in Ferienkursen für Musik in Darmstadt where he received a scholarship. In 1995, he took a class in computer music at IRCAM, and from that point on, the computer became the only tool he used for the elaboration of his works.\r\n\r\nAs a computer music expert, and more particularly as an expert in computer-assisted composition, in 2000 he is given the responsibility of technical support for the IRCAM Forum. He has developed several tools for the OpenMusic environment (synthesis control via Csound), as well as interfaces between this environment and score editors such as Finale and Lilypond.",
                "date_modified": "2026-02-18T11:08:17.096351+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 3,
                        "forum_user": 14,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 0,
                        "keys": [
                            {
                                "id": 544,
                                "membership": 3
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "haddad",
            "first_name": "Karim",
            "last_name": "Haddad",
            "bookmarks": []
        },
        "slug": "openmusic-72-released",
        "pk": 2199,
        "published": false,
        "publish_date": "2023-04-09T17:35:32+02:00"
    },
    {
        "title": "C-LAB Taiwan Sound Lab: Diverse Technological Applications in Sound Art by Cécile HUANG & CHENG Yung-Hsin",
        "description": "Technical implementations of three interactive sound installations from the 2025 C-LAB Sound Festival: DIVERSONICS — spanning WiFi-synchronized kinetic sound, real-time generative spatialization, and audio-driven visual systems.",
        "content": "<p>This session presents the technical implementations of three sound installations from the exhibition part at the 2025 C-LAB Sound Festival: DIVERSONICS.</p>\r\n<p>&nbsp;</p>\r\n<p>➡️ This presentation is part of <a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains March 2026</a></p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"The Price of War _ Photo by Aquamarinefilm\" src=\"https://forum.ircam.fr/media/uploads/user/021579f04f396c0a3173d28365674fce.jpg\" /></p>\r\n<p><strong>The Price of War &mdash; SHYU Ruey-Shiann, LIN Ge-Wei, LIN Chao-Yu, DEAN Chi-You</strong></p>\r\n<p>A WiFi-controlled network of 100 baby strollers, each driven by an Arduino-based MCU, wireless module, and MP3 player, all time-synchronized with ambient tracks containing precisely placed explosion cues. A central host coordinates two playback modes: Auto Mode (fixed volume) and Push Mode, where a rotary encoder detects movement to trigger volume fade-in and fade-out.</p>\r\n<p><strong>&nbsp;</strong></p>\r\n<p><strong><img alt=\"Tender Soul of Ocean: recall _ Photo by Aquamarinefilm\" src=\"https://forum.ircam.fr/media/uploads/user/a030b61fc60c7019606d4f53ce6ee7d3.jpg\" /></strong></p>\r\n<p><strong>Tender Soul of Ocean: recall &mdash; WHYIXD &amp; KLING KLANG KLONG</strong></p>\r\n<p>A custom generative sound engine controlled via OSC cross-references three live data streams &mdash; wind parameters, infrared-tracked audience movement, and the sculpture's light engine &mdash; to continuously shape the sonic environment. 
Spatialization uses Ambisonics, optionally decoded with IRCAM's Spat (Max/MSP), and algorithmic composition ensures no two experiences repeat.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"DRIFT IN TIME _ Photo by Aquamarinefilm\" src=\"https://forum.ircam.fr/media/uploads/user/0670c1d3eb0a514d76334cbfca9bacbb.jpg\" /></p>\r\n<p><strong>DRIFT IN TIME &mdash; ULTRACOMBOS, Cicada, LIN Yu-De</strong></p>\r\n<p>Audio is analyzed in real-time, extracting spectral and timbral features that are transmitted via OSC to drive generative visual rendering. This session will focus on the live performance version, in which sound and image are structurally coupled and continuously shaped by the ensemble's playing.</p>\r\n<p>&nbsp;</p>\r\n<p>Also presented in this C-LAB session: <a href=\"https://forum.ircam.fr/article/detail/reciters-by-po-hao-chi-taiwan-1/\">Reciter(s) by Po-Hao Chi</a> &mdash; a distributed sound performance also featured in 2025 DIVERSONICS.</p>\r\n<p>&nbsp;</p>\r\n<p>Photography by Aquamarinefilm</p>",
        "topics": [
            {
                "id": 4314,
                "name": "generative system",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4379,
                "name": "interactive installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 950,
                "name": "OSC ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 39542,
            "forum_user": {
                "id": 39488,
                "user": 39542,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/327006225_875940203643203_1455195321353496755_n.jpg",
                "avatar_url": "/media/cache/03/82/03821466af8e5260cab8db7be3b2db84.jpg",
                "biography": "",
                "date_modified": "2026-03-05T02:59:10.479070+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 452,
                        "forum_user": 39488,
                        "date_start": "2023-06-16",
                        "date_end": "2026-10-25",
                        "type": 0,
                        "keys": [
                            {
                                "id": 73,
                                "membership": 452
                            },
                            {
                                "id": 196,
                                "membership": 452
                            },
                            {
                                "id": 216,
                                "membership": 452
                            },
                            {
                                "id": 766,
                                "membership": 452
                            },
                            {
                                "id": 1159,
                                "membership": 452
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "tslclab",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 132,
                    "user": 39542,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "c-lab-taiwan-sound-lab-diverse-technological-applications-in-sound-art-by-cecile-huang-cheng-yung-hsin",
        "pk": 4457,
        "published": true,
        "publish_date": "2026-03-05T07:55:48+01:00"
    },
    {
        "title": "Une archive urbaine comme un jardin anglais - Environnement acoustique dans le temps et dans l'espace",
        "description": "Résidence en recherche artistique 2018.19.\r\nBrynjar Franzson Davíð.\r\nEn collaboration avec l’équipe Espaces acoustiques et cognitifs de l’Ircam-STMS et le ZKM.",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">R&eacute;sidence en recherche artistique 2018.19</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<p><strong>Une archive urbaine comme un jardin anglais - Environnement acoustique dans le temps et dans l'espace.</strong><br />En collaboration avec l&rsquo;&eacute;quipe<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac/\">Espaces acoustiques et cognitifs</a><span>&nbsp;</span>de l&rsquo;Ircam-STMS et le ZKM.</p>\r\n<p>Ce projet explore &agrave; la fois la mani&egrave;re dont des espaces artificiels &laquo; acoustiques &raquo; peuvent &ecirc;tre construits &agrave; partir d'impulsions de r&eacute;sonance synth&eacute;tiques, comment ces r&eacute;sonances - et les caract&eacute;ristiques de r&eacute;sonance d'un espace - peuvent varier dans le temps et l'espace, et comment le placement et le comportement temporel de ces r&eacute;sonances peut &ecirc;tre trait&eacute; comme un outil de composition.</p>\r\n<p>Les r&eacute;sonances sont plac&eacute;es dans une grille de haut-parleurs arbitrairement con&ccedil;ues pour que le spectateur puisse fl&acirc;ner, en prenant le contr&ocirc;le de ses propres exp&eacute;riences en explorant les caract&eacute;ristiques acoustiques de l'espace et la relation entre le son instrumental et la r&eacute;sonance spatiale, produisant une exp&eacute;rience d'&eacute;coute immersive.</p>\r\n<p>Imaginez-vous dans le paysage sonore d'un petit jardin clos. L'artiste produit un multiphonique. Le son - &eacute;tendu avec des lignes &agrave; retard et la synth&egrave;se granulaire - se d&eacute;place de l'instrument &agrave; l'espace. En se d&eacute;pla&ccedil;ant, diff&eacute;rentes parties de l'espace r&eacute;pondent aux diff&eacute;rentes parties du multiphonique. 
Quelques-uns des hauts partiels r&eacute;sonnent &agrave; votre gauche, tandis que les partiels bas se dirigent vers vous, &agrave; travers vous, derri&egrave;re vous. Un peu plus tard, d'autres parties des hauts partiels apparaissent &agrave; votre droite - les hauts partiels &agrave; gauche et &agrave; droite oscillent d'avant en arri&egrave;re.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Brynjar Franzson Dav&iacute;&eth;</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202019/.thumbnails/david_franzson.jpg/david_franzson-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographie</h3>\r\n<p>Compositeur d'origine islandaise, Dav&iacute;&eth; Brynjar Franzson vit et travaille &agrave; New York. Son album<span>&nbsp;</span><em>The Negotiation of Context</em>, produit avec l'ensemble Yarn/Wire, a r&eacute;cemment &eacute;t&eacute; publi&eacute; par la maison de disques WERGO. 
Il a collabor&eacute; avec le BBC Scottish Symphony Orchestra et le fonds de commande de la radio nationale islandaise en composant un concerto pour violoncelle,<span>&nbsp;</span><em>on Matter and Materiality</em><span>&nbsp;</span>; avec l'artiste Angela Rawlings bas&eacute;e &agrave; Reykjavik et l'ensemble berlinois Adapter pour<span>&nbsp;</span><em>longitude</em>, une installation/op&eacute;ra; ou encore, avec gnarwhallaby, Vicky Chow, Mariel Roberts, Matt Barbier et Weston Olencki, Matthias Engler et Ingolfur Vilhjalmsson pour<span>&nbsp;</span><em>the Cartography of Time</em>, une exploration permanente de l'exp&eacute;rience du temps.</p>\r\n<p><em>The Negotiation of Context</em><span>&nbsp;</span>a &eacute;t&eacute; s&eacute;lectionn&eacute;e par Wire comme l'un des dix meilleurs albums contemporains en 2014. Le<span>&nbsp;</span><em>New York Times</em><span>&nbsp;</span>le d&eacute;crit comme un &laquo; engageant tactile &raquo;, Wire comme &laquo; convaincant &raquo; et<span>&nbsp;</span><em>Gramophone</em><span>&nbsp;</span>comme &laquo; art sonore qui va clairement quelque part &raquo;.</p>\r\n<p>Dav&iacute;&eth; Brynjar Franzson travaille actuellement sur de nouvelles &oelig;uvres pour +/-, pour l'Ensemble Chromoson ainsi que sur un nouveau projet &agrave; grande &eacute;chelle avec Yarn/Wire. Il co-dirige Carrier Records, un label de musique nouvelle et exp&eacute;rimentale, avec Sam Pluta et Jeff Snyder.</p>\r\n</div>\r\n</div>\r\n<h2 class=\"dotted\">Liens</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"http://franzson.com/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>http://franzson.com/</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 32,
                "name": "Recherche Artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 30,
                "name": "Recherche Musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27,
            "forum_user": {
                "id": 27,
                "user": 27,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ca09108239e42e779637df57c89a8cce?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-09-22T04:12:11.777880+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "franzson",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "une-archive-urbaine-comme-un-jardin-anglais-environnement-acoustique-dans-le-temps-et-dans-lespace",
        "pk": 22,
        "published": true,
        "publish_date": "2019-03-21T15:38:11+01:00"
    },
    {
        "title": "Tutoriel Modalys n°4 : The Reed of Distress",
        "description": "Quatrième partie de ma série de tutoriels sur l'utilisation de Modalys et de ses bibliothèques dans Modalisp, OpenMusic et Max.",
        "content": "<p style=\"text-align: justify;\"><strong>Dans ce tutoriel, nous poursuivons notre voyage avec la connexion &agrave; roseau en utilisant une plaque rectangulaire comme roseau et en attachant un trou au tube, qui peut &ecirc;tre ouvert et ferm&eacute;.</strong></p>\r\n<p style=\"text-align: justify;\">C'&eacute;tait difficile :-) mais j'aime les d&eacute;fis. La documentation sur la connexion &agrave; roseau plante une graine de confusion qui pousse facilement avec des r&eacute;sultats inattendus.</p>\r\n<h6 style=\"text-align: justify;\"></h6>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/D8wQu7F-l1U\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h6 style=\"text-align: center;\"></h6>\r\n<p style=\"text-align: justify;\">Je pense qu'il est plus important que jamais&nbsp;de dire ici que je n'ai jamais particip&eacute; &agrave; un cours Modalys &agrave; l'IRCAM. J'aurais peut-&ecirc;tre eu plus d'informations si je l'avais fait, mais jusqu'&agrave; pr&eacute;sent, ma connaissance de Modalys est uniquement bas&eacute;e sur la documentation et certaines vid&eacute;os que j'ai vues en ligne. Je suis tr&egrave;s reconnaissant pour tout soutien, conseils, aide dans la section des commentaires.</p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p><strong>Ce tutoriel a &eacute;t&eacute; r&eacute;alis&eacute; par Olav Lervik.&nbsp;</strong></p>",
        "topics": [
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 133,
                "name": "Sound synthesis and treatment",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n4-the-reed-of-distress",
        "pk": 726,
        "published": true,
        "publish_date": "2020-09-01T10:00:00+02:00"
    },
    {
        "title": "Integrating LLM-Based Tools into Computer-Assisted Composition Workflows in OpenMusic by Dr. Alex Buck.",
        "description": "This talk presents a computer-assisted composition workflow developed through the integration of LLM-based tools into the OpenMusic environment, culminating in the creation of a custom OpenMusic library named after Studio PANaroma — one of the leading centers for electroacoustic music research in Latin America. With the support of LLMs, I translated into algorithms the musical ideas and techniques developed during the composition of a piece that combines John Cage’s form-building strategies — particularly his use of rhythmic structures — with control procedures drawn from integral serialism. The presentation reflects on the creative process, highlighting how LLMs were used not as a replacement for compositional thought, but as collaborative agents in the formalization of complex musical concepts.",
        "content": "<h1>INTEGRATING LLM-BASED TOOLS INTO COMPUTER-ASSISTED COMPOSITION WORKFLOWS IN OPENMUSIC</h1>\r\n<p><strong>1. OVERVIEW</strong></p>\r\n<p>This proposal presents a computer-assisted composition workflow that integrates Large Language Model (LLM)-based tools with the OpenMusic environment through the use of the Lisp programming language. In this approach, composers can generate functional code and design algorithmic musical structures by interacting with LLMs via natural language prompts, enhancing both accessibility and flexibility in contemporary compositional practices. However, it is essential to stress two points:</p>\r\n<p>i) This workflow does<strong> </strong>not replace the creative act: the composer remains responsible for conceiving the core musical ideas that serve as the basis for algorithmic formalization;</p>\r\n<p>ii) LLMs are still prone to frequent errors. Thus, a solid programming background remains necessary to critically evaluate, debug, and iteratively refine the code generated by the model.</p>\r\n<p><br />In my presentation, I will illustrate this process by tracing the origins of my musical ideas &mdash; grounded in techniques and approaches I have studied from serialist and postserialist composers such as K. Stockhausen, F. Menezes, and B. Ferneyhough &mdash; and how I have adapted them to my own artistic language. I will present a recent composition that explores symbolic relationships between the poetics of John Cage, Clarice Lispector, and Zen Buddhism, drawing materials from texts, musical works, and sound recordings. For this piece, I developed a set of algorithms that enable transductions &mdash; or translations &mdash; between smooth time (real-time in seconds) and striated time (musical metric structures), as well as between text, musical notes, and durations. 
Additional algorithms allowed me to interrelate and translate heterogeneous data from texts, sounds, and musical structures, creating a compositional space where distinct formats and temporalities intersect.</p>\r\n<p>Finally, I will reflect on the role of both my programming skills and limitations in shaping the creative process, and demonstrate how LLM tools were incorporated when appropriate, highlighting their potential and boundaries within artistic practice.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>2. KEY COMPONENTS AND TECHNIQUES</strong></p>\r\n<p><br /><strong>I. Overall Structure</strong><br />After introducing myself, the first part of this presentation will briefly present the composition that serves as the context for this research, outlining the conceptual and technical foundations that informed its design. In addition to the symbolic references to Cage, Clarice Lispector, and Zen philosophy, the piece draws heavily on historical techniques of material generation and formal structuring. These include early serialist procedures, such as Stockhausen's methods for control and material selection based on numeric profiles, as exemplified in Studie II and Gesang der Jünglinge; complex rhythmic manipulation strategies inspired by Brian Ferneyhough, as discussed by Mikhail Malt in The Composition of Complex Rhythms (IRCAM, 2000); and pitch transformation techniques developed by Brazilian composer Flo Menezes, notably his concept of retroactive interlock. Together, these influences form the methodological basis from which the integration of LLM-based tools into the compositional process emerges.</p>\r\n<p><br /><strong>II. Algorithmic Tools Development</strong></p>\r\n<p>The second part of this presentation will focus on a selection of algorithms and symbolic systems I designed specifically for this piece. 
These tools embody the<br />conceptual and structural principles discussed earlier, translating them into programmable, generative processes that operate within a computer-assisted composition workflow.&nbsp;</p>\r\n<p>The object-functions I will present are:</p>\r\n<p style=\"padding-left: 40px;\"><br />i. <span style=\"text-decoration: underline;\">Seq-Metric-Alea</span> (Metric Sequence Generator)<br />Given a target duration in seconds and a base BPM, this algorithm<br />generates a random sequence of time signatures by combining<br />numerators and denominators from user-defined lists. The BPM<br />defines the tempo context for calculating the duration of each<br />measure. Through an adjustment process, the result is a valid<br />combination of measures that adds up to exactly the specified<br />total time, offering both structural control and aleatoric variability.</p>\r\n<p style=\"padding-left: 40px;\"><br />ii. <span style=\"text-decoration: underline;\">Integer-Partition-Permutation</span><br />A method for generating rhythmic structures based on all possible<br />partitions of integers, combined with permutation strategies. This<br />approach allows for systematic yet varied generation of rhythmic<br />trees in OpenMusic, supporting a post-serial approach to rhythm<br />construction.</p>\r\n<p style=\"padding-left: 40px;\"><br />iii. <span style=\"text-decoration: underline;\">Alpha-&gt;Note &amp; Alpha-&gt;Interval</span> (Symbolic Modes)<br />This system establishes correspondences between letters of the<br />alphabet and precise pitches/intervals using midicent values. It<br />offers six distinct modes for interpreting the alphabet, including<br />chromatic, microtonal, and irregular mappings derived from<br />subdivisions of the alphabet. These modes allow the integration of<br />linguistic material into musical structures, enabling words or<br />phrases to generate pitch sequences.</p>\r\n<p style=\"padding-left: 40px;\"><br />iv. 
<span style=\"text-decoration: underline;\">Profile-&gt;abs-index</span><br />This system applies numerical profiles to control the selection<br />and reordering of elements within a dataset. It operates through<br />multiple modes that determine how profiles traverse or reorganize<br />the material, enabling both local permutations and global<br />structural variation. The method facilitates the systematic<br />generation of recurrent patterns, controlled randomness, or<br />formal development based on predefined sequences.</p>\r\n<p><br /><strong>3. CONCLUSION</strong><br />I will close this presentation by reflecting on the implications of incorporating LLM-based tools into compositional practice. While there is a clear risk that over-reliance on these models could discourage the effort required to formalize structures and understand the underlying code, there is also an alternative perspective. These tools, when approached critically, can function as interactive learning environments. Rather than replacing the composer's technical development, they can foster it.</p>\r\n<p></p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 1989,
                "name": "artificial intelligence",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 253,
                "name": "Composition Assistée par Ordinateur",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 3239,
                "name": "electroacoustic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3238,
                "name": "LLM",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1735,
            "forum_user": {
                "id": 1733,
                "user": 1735,
                "first_name": "Alex",
                "last_name": "Buck",
                "avatar": "https://forum.ircam.fr/media/avatars/2018_AlexBuck.jpg",
                "avatar_url": "/media/cache/51/02/5102286e50667ed1c64debfd654335aa.jpg",
                "biography": "Composer-performer Alex Buck teaches Electroacoustic Composition at Studio PANaroma, one of the most respected centers for electroacoustic music in Latin America. He also coordinates the research group SOMNIUM which explores compositional practices grounded in sound-based approaches and digital technologies. His creative and academic work spans acousmatic composition, computer-assisted creation, semiotics, multichannel spatialization, pulse-based improvisation, and music technology. Buck’s research focuses on compositional processes mediated by digital tools, including artificial intelligence tools. He holds a bachelor’s and a Master’s in Electroacoustic Composition from UNESP, where he currently holds a full professor position. He earned his Doctor of Musical Arts (DMA) from the Performer-Composer program at the California Institute of the Arts. Buck has received several first-prize awards: Música Viva [2024], Destellos Electroacoustic Competition [2022], Prix Métamorphoses [2021], MusicWorks Electronic Music Composition Contest and Musica Nova [2019]. He has also received honorable mentions, including one at the ISAC International Sonosfera Ambisonics [2024].",
                "date_modified": "2025-09-28T10:52:13.720873+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Kantorowicz",
            "first_name": "Alex",
            "last_name": "Buck",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 236,
                    "user": 1735,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "integrating-llm-based-tools-into-computer-assisted-composition-workflows-in-openmusic-by-dr-alex-buck",
        "pk": 3597,
        "published": true,
        "publish_date": "2025-08-05T17:13:28+02:00"
    },
    {
        "title": "Improvisation Workshop for Voice & Electronics - ManiFeste-2023 academy",
        "description": "This workshop is intended for singers of all backgrounds—lyric artists and vocal performers—eager to learn both contemporary vocal languages and computer music. Participants will improvise together in voice/electronic duos and in groups, alternating between the role of an improv singer and an electronic performer.",
        "content": "<p><strong>MONDAY JUNE 19- SATURDAY, JUNE 24, 2023</strong></p>\r\n<p><em>ManiFeste, the IRCAM multidisciplinary festival and academy, is a gathering of creative artists in Paris, combining music with other disciplines: theater, dance, digital arts, and visual arts. The academy welcomes and trains many young composers, performers, and listeners from all over the world to benefit from our large-scale artistic and technological environment and a large public audience during the workshops. More details on<a href=\"https://www.ircam.fr/manifeste/academie\">&nbsp;www.ircam.fr</a></em></p>\r\n<p><em>The interpretation master classes associate 20th century repertoire and more recent creations in a desire to go beyond historical specializations. They also offer students special access to some of the most important works of mixed music, where the dimension of sound projection is an integral part of the performance</em></p>\r\n<p><strong>Educational Advisors: <a href=\"http://valerie-philippin.com/\">Val&eacute;rie Philippin</a> </strong>(singer), <a href=\"https://www.ircam.fr/article/realisateur-en-informatique-musicale-chargee-de-lenseignement-par-jean-lochard\"><strong>Jean Lochard</strong> </a>(IRCAM computer music designer and professor at IRCAM), <strong><a href=\"https://www.ircam.fr/person/mikhail-malt\">Mikhail Malt</a>&nbsp;(</strong>researcher and consultant, IRCAM Repmus team)</p>\r\n<p><img alt=\"ManiFeste-2021 Val&eacute;rie Philippin Voice Masterclass\" src=\"/media/uploads/user/addadba7a63181887d46baf1f0fc52f8.jpg\" /></p>\r\n<p><strong>This workshop is intended for singers of all backgrounds&mdash;lyric artists and vocal performers&mdash;eager to learn both contemporary vocal languages and computer music</strong>. 
Participants will improvise together in voice/electronic duos and in groups, alternating between the role of an improv singer and an electronic performer.<br /><br />Through guided improvisation games, participants will progressively test and interact with electronics (transformations, mixes, complex environments), and discover the latest improvisation software developed at IRCAM: OMAX, SOMAX2, and DYCI2.<br /><br />This workshop is a unique opportunity for participants to develop their vocal language, their knowledge of the sonic worlds offered by electronics, and their creativity through exploration and on-the-fly composition.<br /><br />Improvisation supports such as texts (from authors or from the students themselves), graphics, video, or other materials are welcome during the work sessions. Val&eacute;rie Philippin will also propose resources.<br /><br />The workshop will end with a final concert during the ManiFeste festival, open to the public.</p>\r\n<p><strong>APPLICATIONS</strong></p>\r\n<hr />\r\n<p><strong>Applicants must:</strong><br /><strong>-&nbsp;be born after January 1, 1988</strong><br />- not have participated twice before in another ManiFeste Academy workshop<br />- be able to speak and understand English or French<br /><br />Details and application online <a href=\"https://ulysses-network.eu/competitions/manifeste-2023-improvisation/\">on ULYSSES Platform </a><br /><strong>Deadline for applications Wednesday, February 22, 2023, 4pm CEST</strong></p>",
        "topics": [
            {
                "id": 1098,
                "name": "academy",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 824,
                "name": "France",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1100,
                "name": "June 2023",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1099,
                "name": "Paris",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1101,
                "name": "singer",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 22,
                "name": "Voice",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1102,
                "name": " voice performer",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1096,
                "name": "workshop",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17721,
            "forum_user": {
                "id": 17716,
                "user": 17721,
                "first_name": "Natacha",
                "last_name": "Moenne-Loccoz",
                "avatar": "https://forum.ircam.fr/media/avatars/1517-IRCAM-MANIF19--VISUEL-0-TheHouse1-Web.jpg",
                "avatar_url": "/media/cache/83/72/8372e1d360cd768ede652baeed45a1fb.jpg",
                "biography": null,
                "date_modified": "2024-12-12T15:36:41.115903+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 206,
                        "forum_user": 17716,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "moennelo",
            "first_name": "Natacha",
            "last_name": "Moenne-Loccoz",
            "bookmarks": []
        },
        "slug": "improvisation-voice-electronics-workshop-manifeste-2023-june-2023",
        "pk": 2025,
        "published": true,
        "publish_date": "2023-01-23T12:54:54+01:00"
    },
    {
        "title": "Somax 2 Tutorials",
        "description": "Cette page rassemble des tutoriels vidéo sur Somax2.",
        "content": "<h2><img src=\"/media/uploads/projects/images/Capture_d&eacute;cran_le_2023-03-30_&agrave;_11.52.37_217kqSo.png\" alt=\"\" width=\"218\" height=\"285\" />&nbsp;&nbsp;</h2>\r\n<h1>Tutorials</h1>\r\n<h2>Tutorial First Steps</h2>\r\n<h2><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/6Azyt_5C6KQ\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h2>Tutorial Build your Audio Corpus</h2>\r\n<h2><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/p4nUd5pot4w\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h2>Tutorial Max Tutorials</h2>\r\n<h2><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/17Ilgbw9XN8\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h2>Tutorial Performance Strategies&nbsp;</h2>\r\n<h2><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/nCTW9QfeiHk\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h2>Real-Time Corpus Recording</h2>\r\n<h2><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/G5mgmvy5lNs\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h2>Custom Labels and Save Presets</h2>\r\n<h2><iframe 
width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/yq8hR4qi8yc\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h1>Demos</h1>\r\n<h2>Demo Mimetisms</h2>\r\n<h2><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/LB7gyDnwnQ8\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h2>Demo Full Interaction I</h2>\r\n<h2><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/rl74iVJWFD8\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h2>Demo Full Interaction II</h2>\r\n<p><iframe width=\"640\" height=\"431\" src=\"//player.vimeo.com/video/799305509?title=0&amp;amp;byline=0\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h2><br />Demo Full Interaction III</h2>\r\n<h2><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/uFqo2HG0FNk\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h2>Demo Installation Mode</h2>\r\n<h2><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/72wys4Xa0l4\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></h2>\r\n<h2>Demo Harmonizations</h2>\r\n<p><iframe width=\"560\" 
height=\"315\" src=\"https://www.youtube.com/embed/70wu7QMom_A\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<div>\r\n<p><span>________________</span></p>\r\n<p><span>Somax2 (c) Ircam 2020 -&nbsp;</span><br /><br /><span></span></p>\r\n<p>Somax2 est une version enti&egrave;rement renouvel&eacute;e du paradigme de co-improvisation r&eacute;active Somax, n&eacute; au sein de l'&eacute;quipe Repr&eacute;sentations Musicales de l'Ircam - STMS.</p>\r\n<p>Il s'inscrit dans le cadre des projets de recherche ANR MERCI (Mixed Musical Reality with Creative Instruments) et ERC REACH (Raising Co-creativity in Cyber-Human Musicianship), dirig&eacute;s par G&eacute;rard Assayag.</p>\r\n<p>D&eacute;veloppement de Somax2 par Joakim Borg, documentations et tutoriels par Joakim Borg et Marco Fiorini.</p>\r\n<p>Somax cr&eacute;&eacute; par G&eacute;rard Assayag et Laurent Bonnasse-Gahot, adaptations et pr&eacute;-version 2 par Axel Chemla Romeu Santos, prototype pr&eacute;liminaire par Olivier Delerue.</p>\r\n<p>Remerciements &agrave; Georges Bloch, Mikha&iuml;l Malt et Marco Fiorini pour leur expertise continue.</p>\r\n<p>Remerciements &agrave; Bernard Borron, Bernard Magnien, Carine Bonnefoy, Jo&euml;lle L&eacute;andre, Fabrizio Cassol et Marco Fiorini pour leur mat&eacute;riel musical utilis&eacute; dans le corpus de distribution Somax2.</p>\r\n<p>Plus d'informations, de contexte, de d&eacute;mos et de m&eacute;dias sur la page du projet Somax2&nbsp;:&nbsp;<span><a href=\"http://repmus.ircam.fr/somax2\">repmus.ircam.fr/somax2</a></span></p>\r\n<p><span><a href=\"https://forum.ircam.fr/projects/detail/somax-2/\">https://forum.ircam.fr/projects/detail/somax-2/</a></span></p>\r\n</div>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1287,
                "name": "REACH",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 545,
                "name": "Repmus team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32267,
            "forum_user": {
                "id": 32219,
                "user": 32267,
                "first_name": "Marco",
                "last_name": "Fiorini",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-01-16_at_10.39.51.jpeg",
                "avatar_url": "/media/cache/e7/ed/e7ed5b0d44a066e65e188a351b8c9bb8.jpg",
                "biography": "Marco Fiorini is an Italian musician and researcher specializing in human-machine interaction in musical improvisation. \nHe is part of the Music Representation team at IRCAM in Paris, working on the ERC REACH project with a focus on Somax2. \nAs a PhD candidate at Sorbonne Université he develops co-creative instruments that foster real-time interaction between musicians and artificial agents.\nHe has collaborated with artists such as Jöelle Léandre, George Lewis, Steve Lehman, and Horse Lords. His work as guitarist, electronic musican and computer music designer have been featured at major international venues and festivals including Carnegie Hall (New York), ManiFeste (Centre Georges Pompidou, Paris), Improtech Paris-Tokyo (Tokyo University of the Arts), Klang (Royal Danish Academy of Music, Copenhagen), Mixtur (ESMUC, Barcelona).\nIn 2024, he is an invited lecturer at the Max Summer School at Tokyo Geidai University of the Arts, and in 2025 he will lead a Somax2 workshop at Berklee College of Music for the 50th anniversary of the International Computer Music Conference in Boston.\nHe holds degrees in Jazz Guitar, Electronic Music, Sound and Music Computing and Software Engineering.",
                "date_modified": "2026-02-25T18:50:33.457396+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 407,
                        "forum_user": 32219,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "fiorini",
            "first_name": "Marco",
            "last_name": "Fiorini",
            "bookmarks": []
        },
        "slug": "somax-2",
        "pk": 2236,
        "published": true,
        "publish_date": "2023-05-09T13:12:21+02:00"
    },
    {
        "title": "Techniques for “subject-oriented” spatialized music by William Fastenow & Christopher Dobrian",
        "description": "This talk discusses composing in such a way that each listener intentionally receives a different, yet equally valid, distribution of the musical sounds.  It will highlight techniques such as \"spatial pointillism,\" in which contiguous nodes are not necessarily produced by contiguous speakers; and using combinatorics to map topology between conceptual musical space (the input) and the physical performance space (the output).",
        "content": "<h2></h2>\r\n<p>Most sound spatialization schemes assume an ideal listener location, a single &ldquo;sweet spot&rdquo; where the localization of sounds is perceived optimally. This implies that all other listener locations are sub-optimal, where listeners perceive an &ldquo;incorrect&rdquo; spatialization and balance. To address this problem, William Fastenow proposes what he terms subject-oriented music, in which each listener intentionally receives a different, yet equally valid, distribution of the musical sounds. This idea leads to the use of compositional concepts and techniques that deliberately produce multiple heard versions of a work, with variable timings and locations of sound events. In this presentation, Fastenow will describe a versatile multichannel system to create spatio-musical gestures in the listening field using pairwise speaker panning, with the express intention of giving each listener a unique musical experience. With collaboration from composer Christopher Dobrian, he will discuss some compositional techniques that they have found to be especially appropriate to this &ldquo;subject-oriented&rdquo; spatialization.</p>\r\n<p>In Fastenow&rsquo;s method, each loudspeaker in the multichannel system serves as a node, and each pairwise combination of speakers also produces intermediate virtual nodes, resulting in a large yet finite number of possible output mixes for every input sound object. Each sound object&rsquo;s virtual location moves through a series of those nodes, which might or might not correspond to the speakers closest to that location in the actual performance space. The result is a spatial pointillism, in which contiguous nodes are not necessarily produced by contiguous speakers. This effectively exploits gestalt perceptual principles, such that the music is heard in different versions from different listener vantage points. 
That, in turn, has led to the compositional use of combinatorics and a mapping topology between conceptual musical space (the input) and the physical performance space (the output). The presenter will discuss aesthetic considerations of these compositional and spatialization techniques in different types of performance spaces.</p>\r\n<p><img alt=\"1001 Charlies - Virtual Nodes\" src=\"https://forum.ircam.fr/media/uploads/user/61aefac817b5bec4b392eefdd8e0b93b.png\" /></p>\r\n<p><img alt=\"Nodes in Conceptual Four-Speaker Space\" src=\"https://forum.ircam.fr/media/uploads/user/57a7c6599b35b39a0ae8f875ac95b697.png\" /></p>\r\n<p><img alt=\"Straight Lines Form Parabolic Curve\" src=\"https://forum.ircam.fr/media/uploads/user/957c6fd50b096d9e107fe4f9dc5925c3.png\" /></p>",
        "topics": [
            {
                "id": 621,
                "name": "3daudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 386,
                "name": "Composition strategies",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2249,
                "name": "spatial",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1801,
                "name": "Spatialisation ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 42490,
            "forum_user": {
                "id": 42432,
                "user": 42490,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/wdf_headshot_smWeb-2.jpg",
                "avatar_url": "/media/cache/dd/cd/ddcd70602421510a28daf08fcd802005.jpg",
                "biography": "WILLIAM DAVID FASTENOW is a composer, performer, and arts technology entrepreneur based in San Juan Capistrano, CA and Brooklyn, NY.  He is a husband, father, and dog-father; the principal and owner of Park Boulevard Productions; strategic director for MorrowSoundⓇ; and is the Founding Director of Performance Technology for the Center for Innovation in the Arts, Associate Director for Beyond the Machine, and adjunct faculty at The Juilliard School.  His work most often involves spatial sound, transdisciplinary arts, and interactivity.  He enjoys figuring out how to make broken things work, and make complex things simple.  Recent projects include: Unfolding, commissioned by Mari Kimura for solo violin and electronics; Wave Music XII: 1001 Charlies, invited guest in Charlie Morrow’s Wave Music series, for conch and custom 26-speaker/1001-virtual node spatial music system; and Chiaroscuro-19, for telematic string quartet, modular synthesizer, and interactive dance.  As often as possible, he and his family enjoy exploring odd corners of the globe, finding new vistas, sounds, watering holes, and adventures.",
                "date_modified": "2024-12-12T19:40:56.875883+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 746,
                        "forum_user": 42432,
                        "date_start": "2024-02-22",
                        "date_end": "2025-02-22",
                        "type": 0,
                        "keys": [
                            {
                                "id": 311,
                                "membership": 746
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "williamfastenow",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "techniques-for-subject-oriented-spatialized-music",
        "pk": 3016,
        "published": true,
        "publish_date": "2024-10-06T21:07:09+02:00"
    },
    {
        "title": "Artificial Dream",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d77d38663600ab174fcf95b0df1b0b7e.png\"></p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9a8d3c5b028c14973dac9891b88bab77.jpg\"></p>\n<p>In a world where gender is becoming increasingly fluid and non-binary, we are left to ponder whether traditional gender inequalities will still exist in the future. As we explore the possibility of a new gender coding system, we question the nature of gender and its impact on society.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5cda0490c2d15b95b0c30259f3204ddb.png\"></p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3b103bba2a77f3d283484ba8cfd1fa62.jpg\"></p>\n<p>\"Artificial Dreams\" is a groundbreaking art project that utilizes virtual reality technology as its primary medium to create an immersive and interactive experience for the audience. The project provides a unique perspective on the operation mechanism of a 3-dimensional gender system, exploring the life experiences and inner world of the main character through a combination of reality and imagination.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f8d80dd1bb68c3669d4665d229d32aee.png\"></p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/872367a8126852a6833d9ce8c901e5a9.jpg\"></p>",
        "topics": [
            {
                "id": 1250,
                "name": "Immersive ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1211,
                "name": "narrative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1251,
                "name": " spacial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 936,
                "name": " VR",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32909,
            "forum_user": {
                "id": 32861,
                "user": 32909,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/cf13102e84138bfe25337c37fe182c09?s=120&d=retro",
                "biography": "Aijia Wang is an explorer of new media art.\n\nShe has been a student of Information Experience Design at the Royal College of Art since 2021. Aijia currently lives and works in Beijing and London. \n\nAijia specializes in collaborative interdisciplinary approaches to art creation, using different media and sensory channels to design experiences that help audiences empathize with different human things. She uses diverse technologies such as artificial intelligence to explore the symbiotic relationship between human and nonhuman species, engaging with nonhuman-centered design and speculative design practices.\n\nAijia's practice takes the forms of interactive installation, sound design, graphic design, projection art, and writing.",
                "date_modified": "2023-10-17T15:07:47.000581+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "aijiawang",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "artificial-dream-1",
        "pk": 2165,
        "published": true,
        "publish_date": "2023-03-27T12:59:40.431972+02:00"
    },
    {
        "title": "Networked performance as a space for collective creation",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p><span data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;This presentation will examine the recent history of networked performance, in light of the multiple possibilities telematics offers for re-evaluating traditional notions of New Music and other repertoires. This practice-based research presentation will draw on examples from my own experience using JackTrip &ndash; in close collaboration with its developers at CCRMA &ndash; in directing and conducting student ensembles in projects that involve networked performance and co-creativity. Networked performance questions the distinction between the processes of writing/composing and improvisation, as the two are often interlinked in telematic performances. More specifically, telematics offers an ideal site for the practice of &lsquo;live composition&rsquo;, which de-hierarcherises the roles and social distributions often present in the structures of New Music practices. Blurring these roles incites rethinking the notion of the author; but not necessarily, however, in the manner of Foucault or of Barthes of seeing the author as the &lsquo;last signifier&rsquo;, which minimises the author&rsquo;s presence, and thus risks further invisibilising underrepresented authors. Rather, by potentially levelling-out the roles of performer, improvisor and composer in the distributed online space, telematics creates a fertile environment for new authorial practices to emerge. Telematic musical performances also bring new reflections to music technology itself, as they call into play questions of the nature of the network as a medium, an &lsquo;instrument&rsquo;, or a shared virtual acoustic space, as well as the roles of the participants within it. Making music online with near-zero latency calls for a fundamental rethinking of the potential of music technology to transform musical practice as such. 
In addition to overcoming, to a great extent, the barriers to synchronous collective music-making posed by the pandemic, and offering a space for the development of new repertoires as described above, it also engenders new opportunities for creating community internationally and presenting live music for international audiences. Reducing latency to near zero means that these collective musical practices may include a range of genres, ranging from chamber music from the Western classical repertoire to collective improvisation spanning continents.\\n&quot;}\" data-sheets-userformat=\"{&quot;2&quot;:5119,&quot;3&quot;:{&quot;1&quot;:0},&quot;4&quot;:{&quot;1&quot;:2,&quot;2&quot;:16777215},&quot;5&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;6&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;7&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;8&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;9&quot;:0,&quot;10&quot;:0,&quot;11&quot;:4,&quot;12&quot;:0,&quot;15&quot;:&quot;Arial&quot;}\">This practice-based research presentation will draw on examples from my own experience using JackTrip &ndash; in close collaboration with its developers at CCRMA &ndash; in directing and conducting student ensembles in projects that involve networked performance and co-creativity. 
</span></p>\r\n<p><span data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;This presentation will examine the recent history of networked performance, in light of the multiple possibilities telematics offers for re-evaluating traditional notions of New Music and other repertoires. This practice-based research presentation will draw on examples from my own experience using JackTrip &ndash; in close collaboration with its developers at CCRMA &ndash; in directing and conducting student ensembles in projects that involve networked performance and co-creativity. Networked performance questions the distinction between the processes of writing/composing and improvisation, as the two are often interlinked in telematic performances. More specifically, telematics offers an ideal site for the practice of &lsquo;live composition&rsquo;, which de-hierarcherises the roles and social distributions often present in the structures of New Music practices. Blurring these roles incites rethinking the notion of the author; but not necessarily, however, in the manner of Foucault or of Barthes of seeing the author as the &lsquo;last signifier&rsquo;, which minimises the author&rsquo;s presence, and thus risks further invisibilising underrepresented authors. Rather, by potentially levelling-out the roles of performer, improvisor and composer in the distributed online space, telematics creates a fertile environment for new authorial practices to emerge. Telematic musical performances also bring new reflections to music technology itself, as they call into play questions of the nature of the network as a medium, an &lsquo;instrument&rsquo;, or a shared virtual acoustic space, as well as the roles of the participants within it. Making music online with near-zero latency calls for a fundamental rethinking of the potential of music technology to transform musical practice as such. 
In addition to overcoming, to a great extent, the barriers to synchronous collective music-making posed by the pandemic, and offering a space for the development of new repertoires as described above, it also engenders new opportunities for creating community internationally and presenting live music for international audiences. Reducing latency to near zero means that these collective musical practices may include a range of genres, ranging from chamber music from the Western classical repertoire to collective improvisation spanning continents.\\n&quot;}\" data-sheets-userformat=\"{&quot;2&quot;:5119,&quot;3&quot;:{&quot;1&quot;:0},&quot;4&quot;:{&quot;1&quot;:2,&quot;2&quot;:16777215},&quot;5&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;6&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;7&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;8&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;9&quot;:0,&quot;10&quot;:0,&quot;11&quot;:4,&quot;12&quot;:0,&quot;15&quot;:&quot;Arial&quot;}\">Networked performance questions the distinction between the processes of writing/composing and improvisation, as the two are often interlinked in telematic performances. 
More specifically, telematics offers an ideal site for the practice of &lsquo;live composition&rsquo;, which de-hierarcherises the roles and social distributions often present in the structures of New Music practices. Blurring these roles incites rethinking the notion of the author; but not necessarily, however, in the manner of Foucault or of Barthes of seeing the author as the &lsquo;last signifier&rsquo;, which minimises the author&rsquo;s presence, and thus risks further invisibilising underrepresented authors. Rather, by potentially levelling-out the roles of performer, improvisor and composer in the distributed online space, telematics creates a fertile environment for new authorial practices to emerge. Telematic musical performances also bring new reflections to music technology itself, as they call into play questions of the nature of the network as a medium, an &lsquo;instrument&rsquo;, or a shared virtual acoustic space, as well as the roles of the participants within it.</span></p>\r\n<p><span data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;This presentation will examine the recent history of networked performance, in light of the multiple possibilities telematics offers for re-evaluating traditional notions of New Music and other repertoires. This practice-based research presentation will draw on examples from my own experience using JackTrip &ndash; in close collaboration with its developers at CCRMA &ndash; in directing and conducting student ensembles in projects that involve networked performance and co-creativity. Networked performance questions the distinction between the processes of writing/composing and improvisation, as the two are often interlinked in telematic performances. More specifically, telematics offers an ideal site for the practice of &lsquo;live composition&rsquo;, which de-hierarcherises the roles and social distributions often present in the structures of New Music practices. 
Blurring these roles incites rethinking the notion of the author; but not necessarily, however, in the manner of Foucault or of Barthes of seeing the author as the &lsquo;last signifier&rsquo;, which minimises the author&rsquo;s presence, and thus risks further invisibilising underrepresented authors. Rather, by potentially levelling-out the roles of performer, improvisor and composer in the distributed online space, telematics creates a fertile environment for new authorial practices to emerge. Telematic musical performances also bring new reflections to music technology itself, as they call into play questions of the nature of the network as a medium, an &lsquo;instrument&rsquo;, or a shared virtual acoustic space, as well as the roles of the participants within it. Making music online with near-zero latency calls for a fundamental rethinking of the potential of music technology to transform musical practice as such. In addition to overcoming, to a great extent, the barriers to synchronous collective music-making posed by the pandemic, and offering a space for the development of new repertoires as described above, it also engenders new opportunities for creating community internationally and presenting live music for international audiences. 
Reducing latency to near zero means that these collective musical practices may include a range of genres, ranging from chamber music from the Western classical repertoire to collective improvisation spanning continents.\\n&quot;}\" data-sheets-userformat=\"{&quot;2&quot;:5119,&quot;3&quot;:{&quot;1&quot;:0},&quot;4&quot;:{&quot;1&quot;:2,&quot;2&quot;:16777215},&quot;5&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;6&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;7&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;8&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;9&quot;:0,&quot;10&quot;:0,&quot;11&quot;:4,&quot;12&quot;:0,&quot;15&quot;:&quot;Arial&quot;}\">Making music online with near-zero latency calls for a fundamental rethinking of the potential of music technology to transform musical practice as such. In addition to overcoming, to a great extent, the barriers to synchronous collective music-making posed by the pandemic, and offering a space for the development of new repertoires as described above, it also engenders new opportunities for creating community internationally and presenting live music for international audiences. 
Reducing latency to near zero means that these collective musical practices may include a range of genres, ranging from chamber music from the Western classical repertoire to collective improvisation spanning continents.<br /></span></p>",
        "topics": [],
        "user": {
            "pk": 24769,
            "forum_user": {
                "id": 24742,
                "user": 24769,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/kretz_photo_florence.jpeg",
                "avatar_url": "/media/cache/32/9c/329c88320f1a66b9f139ae07d4bbffc7.jpg",
                "biography": "Hans Kretz is a conductor, pianist, researcher and author. He holds PhDs in Music and Philosophy from the University of Leeds and the University of Paris 8 Vincennes-Saint-Denis respectively. His research interests include philosophy of culture, aesthetics, philosophical anthropology and philosophy of technology. His writings have appeared in the Recherches d'Esthétique Transculturelle series of L'Harmattan, and in the Cahiers Critiques de Philosophie. He is a Lecturer at Stanford University, where he currently conducts and directs the Stanford New Ensemble.",
                "date_modified": "2025-12-28T14:44:33.622746+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 979,
                        "forum_user": 24742,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "hkretz",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "networked-performance-as-a-space-for-collective-creation",
        "pk": 1337,
        "published": true,
        "publish_date": "2022-09-13T13:03:36+02:00"
    },
    {
        "title": "Long Lasting Men Perfume in Pakistan and Women Perfume in Pakistan Guide",
        "description": "Danieen.com makes it easy to choose Men Perfume in Pakistan and Women Perfume in Pakistan. A good fragrance improves self-confidence. Men Perfume in Pakistan includes refreshing scent profiles. Women Perfume in Pakistan offers elegant and soft notes. Long-lasting perfumes are more reliable. Wearing perfume daily adds freshness. A beautiful scent enhances your personality. Fragrance is an invisible style statement. Danieen.com is a trusted option.",
        "content": "<p>Discover a great range of <a href=\"https://danieen.com/product-category/perfume-for-men\">Men Perfume in Pakistan</a> and Women Perfume in Pakistan at <a href=\"https://danieen.com\">danieen.com</a>. Fragrance plays a big role in personality building. Men Perfume in Pakistan gives a bold impression.<a href=\"https://danieen.com/product-category/perfume-for-women\"> Women Perfume in Pakistan</a> creates a soft and charming aura. Long-lasting perfumes are best for daily use. A pleasant scent improves your mood. Perfume completes your grooming routine. A signature scent adds uniqueness. Danieen.com has options for everyone.</p>\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 160130,
            "forum_user": {
                "id": 159898,
                "user": 160130,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Shine-Crystal-Image-1-350x435_1.jpg",
                "avatar_url": "/media/cache/21/87/218757fa8a8ee7d3302ca817251a053a.jpg",
                "biography": "At danieen.com, Men Perfume in Pakistan and Women Perfume in Pakistan are available in premium quality. Perfume is a daily essential for confidence. Men Perfume in Pakistan suits powerful personalities. Women Perfume in Pakistan matches graceful styles. Long-lasting scents are always preferred. A good fragrance makes your presence noticeable. Perfume is part of modern fashion. A perfect scent builds strong impressions. Shop easily at danieen.com.",
                "date_modified": "2026-02-04T07:52:39.822795+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "danieen",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "long-lasting-men-perfume-in-pakistan-and-women-perfume-in-pakistan-guide",
        "pk": 4310,
        "published": false,
        "publish_date": "2026-02-04T07:55:11.859036+01:00"
    },
    {
        "title": "CAIRO - Creative Augmented Impulse Response Objects by Nadine Schütz, Anthony Gallien, John Burnett, Markus Noisternig, Olivier Warusfel",
        "description": "",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/cairo_montage_nadine_schütz.jpg\" alt=\"\" max-width=\"864\" max-height=\"864\" width=\"2272\" height=\"2272\" /></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">Presented by Nadine Sch&uuml;tz,&nbsp;Anthony Gallien, John Burnett, Markus Noisternig, Olivier Warusfel</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/ns_echora/\" target=\"_blank\">Biography</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"page\" title=\"Page 2\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>The availability of spherical microphone arrays has democratized the use of Spatial Room Impulse Responses (SRIRs) to characterize the acoustic properties of a room. Their use, however, remains largely limited to the realistic reproduction of three-dimensional reverberation effects by multichannel convolution (auralization) in the fields of architectural acoustics and archaeoacoustics. In the musical domain, SRIRs can be used as a reverberation effect. Musical creation, however, requires going beyond this simple objective: it must be possible to manipulate these SRIRs so as to reveal, accentuate, and shape their acousmatic properties as much as the acoustic and spatial properties of the measured room or site. In this context, the aim is to give SRIRs the status of structured objects equipped with various transformation operators for &ldquo;sculpting&rdquo; their acousmatic dimensions.&nbsp;</span></p>\r\n<div class=\"page\" title=\"Page 3\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>The proposed manipulations rest on two distinct approaches. The first works in the signal domain and consists of a temporal segmentation of the multichannel stream after decomposition into the spherical harmonics (SH) domain. A set of spatial morphing operations (beamforming, rotation) and spectral filters is then applied to each segment. The second approach relies on a descriptor-extraction stage operating at several scales and based on the Herglotz formalism. These descriptors are used as control parameters for the subsequent synthesis process. This second method opens a new field of spatial sound creation by offering an innovative way of apprehending acoustic space as an artistic object and material. It grants acoustic space a genuine status as a musical instrument. The presentation details the signal processing methods employed, describes the proposed transformation tools, and illustrates their application through various use cases. </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [],
        "user": {
            "pk": 17607,
            "forum_user": {
                "id": 17604,
                "user": 17607,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Sonic_Topologies_1257_b_cutsquare_smallsmall.jpg",
                "avatar_url": "/media/cache/b4/99/b499fa45336c40f5a3857c39a793e3a0.jpg",
                "biography": "Nadine Schütz is a sound artist, architect and composer from Switzerland, based in Paris. She explores the auditory landscape like an environmental interpreter and composes by developing the acoustic qualities and ambiences of a site. Space and place thus become a creative score that informs and directs its own transformation. Her compositions, performances and scenographic sound work have been presented in Zurich, Paris, London, Venice, Naples, New York, Moscow, Tokyo and Kyoto. Within urban development projects, her interventions combine the artistic reading of a site with a concern for augmenting its acoustic comfort and identity. Through an original combination of techniques derived from bio- and psychoacoustics, music, sculpture and landscape architecture, she creates sound installations and acoustic designs that participate tangibly in users' daily experiences. Nadine holds a PhD in landscape acoustics from ETH Zurich, where she installed a new studio for the spatial simulation of sonic landscapes. She teaches at ETH Zurich and Parsons Paris and is currently a guest composer in the Acoustic-and-Cognitive-Spaces and the Perception-and-Sound-Design Teams at IRCAM-STMS.",
                "date_modified": "2024-03-21T11:01:29.312466+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 766,
                        "forum_user": 17604,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "ns_echora",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "cairo-creative-augmented-impulse-response-objects-by-nadine-schutz",
        "pk": 3322,
        "published": true,
        "publish_date": "2025-03-05T17:26:44+01:00"
    },
    {
        "title": "Piano Augmenté",
        "description": "Augmented piano in 1000 ways",
        "content": "<p><a href=\"https://www.youtube.com/c/JePoesie\">https://www.youtube.com/c/JePoesie</a></p>",
        "topics": [],
        "user": {
            "pk": 29017,
            "forum_user": {
                "id": 28989,
                "user": 29017,
                "first_name": "Annatris",
                "last_name": "Corot",
                "avatar": "https://forum.ircam.fr/media/avatars/jepoesie.jpg",
                "avatar_url": "/media/cache/44/42/4442505b503092a4a0cbf5145507a20b.jpg",
                "biography": "ChatGPT would say about Annatris that she is an accomplished musician, having received her training at the Conservatory of Versailles and later at the Paris Conservatory. She plays piano, violin, and flute. Her passion for music transcends genres, seamlessly blending classical and contemporary elements to create captivating compositions.\nBeyond her musical talents, Annatris has made significant contributions to musical innovation. She played a pivotal role in the development of PianoGo, a groundbreaking musical notepad and digital companion designed specifically for pianists. These innovations have opened new horizons for musicians, seamlessly merging musical tradition with technological advances, thereby expanding the realms of musical creativity.\nSimultaneously, Annatris has pioneered innovative tools for the visualization and spatial rendering of the piano. These original creations empower musicians to explore new dimensions in their musical interpretations, resulting in unforgettable auditory and visual experiences.\nAnnatris shares her passion and talents with a global audience through her YouTube channel, @JePoesie, which boasts 2,800 subscribers and over 1.2 million views. On her cha",
                "date_modified": "2023-10-22T14:48:09.250336+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sergi",
            "first_name": "Annatris",
            "last_name": "Corot",
            "bookmarks": []
        },
        "slug": "piano-augmente",
        "pk": 2154,
        "published": true,
        "publish_date": "2023-03-21T21:01:44.435591+01:00"
    },
    {
        "title": "Modalys Tutorial No. 2: Bowing Spiderman",
        "description": "Second part of my series of tutorials on using Modalys and its libraries in Modalisp, OpenMusic, and Max.",
        "content": "<p><strong>This tutorial is about how to bow a string.</strong></p>\r\n<p>As in the first tutorial, I start in Modalisp, move on to OpenMusic, and finish with Max. Because trying to virtually reconstruct a realistic string is, in my opinion, a bit boring, I decided to use the material-properties table from the Modalys documentation to make the string out of spider silk, and very long ;-)</p>\r\n<h6></h6>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/hZtH4uY09A0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h6></h6>\r\n<p>After I made this tutorial, a user pointed out a (fairly obvious) possibility in Max: the parameters can be written in the inspector for each object.</p>\r\n<p></p>\r\n<p style=\"text-align: left;\"><strong>This tutorial was made by Olav Lervik.&nbsp;</strong></p>",
        "topics": [
            {
                "id": 461,
                "name": "Bow",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 277,
                "name": "Max 8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 348,
                "name": "Max externals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 133,
                "name": "Sound synthesis and treatment",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 462,
                "name": "String",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n2-bowing-spiderman",
        "pk": 724,
        "published": true,
        "publish_date": "2020-08-04T10:20:06+02:00"
    },
    {
        "title": "Chuchotements Burlesques",
        "description": "Outcome of the artistic research residency of the Iranian composer Alireza Farhang",
        "content": "<p><iframe width=\"560\" height=\"314\" style=\"display: block; margin-left: auto; margin-right: auto;\" src=\"//www.youtube.com/embed/ciOnEaJSR08?feature=youtu\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p style=\"text-align: center;\"><sub>Excerpt from the recording of the show CHUCHOTEMENTS BURLESQUES</sub></p>\r\n<p></p>\r\n<p><strong>An attempt in which gesture transcends the word -&nbsp;</strong><em>Based on a free adaptation of poems by Henri Michaux</em></p>\r\n<p style=\"text-align: justify;\"><em><sub>World premiere: Marseille, May 18, 2019, Festival Les Musiques. Composition: Alireza Farhang. Performance: <a href=\"http://www.triokdm.com\">Le trio K/D/M</a>. Analysis of the texts and drawings: Bruno Boulzaguet and Alireza Farhang. Staging, lighting design, and actor: Bruno Boulzaguet. Design and making of the gloves: Thomasine Barnekow. Computer music design: Jos&eacute; Miguel Fernandez</sub></em></p>\r\n<p style=\"text-align: justify;\"><strong>Commissioned by GMEM, co-produced by CIRM</strong></p>\r\n<p style=\"text-align: justify;\">Duration: <em>25 minutes</em></p>\r\n<ul>\r\n<li style=\"text-align: justify;\">Silence, shadow and light</li>\r\n<li style=\"text-align: justify;\">Movement, sound gestures</li>\r\n<li style=\"text-align: justify;\">New technologies and a voice</li>\r\n<li style=\"text-align: justify;\">An actor, an accordionist, and two percussionists</li>\r\n<li style=\"text-align: justify;\">Arms slipped into the leather of a pair of gloves</li>\r\n<li style=\"text-align: justify;\">A graphic universe</li>\r\n<li style=\"text-align: justify;\">Motion sensors</li>\r\n<li style=\"text-align: justify;\">Creators of sonorities</li>\r\n<li style=\"text-align: justify;\">Gesture-capture technology</li>\r\n</ul>\r\n<p></p>\r\n<p style=\"text-align: justify;\">According to the poet Henri Michaux, words, trapped in the poverty and rigidity of their definitions, no longer suffice. From his travels the poet gathered musical instruments foreign to him, on which he liked to improvise pieces of which no trace remains. Michaux drew his compositions, translated them into words, and wrote them up in chapters like a musical manifesto. These texts and images were our raw material for navigating his literary universe, verbal as much as pictorial. Is it a theatrical concert? A choreography of sounds and lights? A space-time wavering between reality and virtuality. The creative process of the work unfolded organically: hybrid score, raw material, transdisciplinarity. \"<strong>Chuchotements burlesques</strong>\" is an attempt in which gesture transcends the word.</p>\r\n<p style=\"text-align: justify;\">The composer <a href=\"https://www.alirezafarhang.com\">Alireza Farhang</a> completed an artistic research residency at Ircam within the ISMM team (Sound Music Movement Interaction).</p>\r\n<p style=\"text-align: justify;\">His project was entitled:</p>\r\n<p style=\"text-align: justify;\"><strong>Traces of expressivity: a gestural data-stream score for interdisciplinary works</strong><br />In collaboration with the <a href=\"https://www.ircam.fr/recherche/equipes-recherche/repmus/\">Music Representations</a> and <a href=\"https://www.ircam.fr/recherche/equipes-recherche/issm/\">Sound Music Movement Interaction</a> teams of Ircam-STMS.</p>\r\n<p style=\"text-align: justify;\">In multidisciplinary works grounded in music, the importance of communication between artists from different disciplines led the composer to conceive a high-level universal score. This hybrid score consists of a graphic score and a gestural data-stream score. The latter, the object of this Ircam artistic and musical research residency, aims to render electronic and instrumental sound gestures in the form of computer data. The physical gestures of the performers are likewise translated and formalized computationally via the data-stream score.</p>\r\n<p style=\"text-align: justify;\">The study addresses the problem of semiology. Within the framework of theoretical research, practical work, and existing software, the residency focused on formalizing a technique and developing a technology enabling a first attempt at an interface for this high-level hybrid score.&nbsp;</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<h6 style=\"text-align: justify;\">Find the next call for artistic research residencies in September 2019 on the <a href=\"https://www.ulysses-network.eu\">Ulysses</a> platform and on the <a href=\"https://forum.ircam.fr\">IRCAM Forum</a> website.&nbsp;</h6>\r\n<h6 style=\"text-align: justify;\">Ircam's artistic research residency program offers artists of all disciplines the opportunity to collaborate with one or more Ircam research teams, in a residency that may be extended at a partner creation center.</h6>\r\n<h6 style=\"text-align: justify;\">The benefits of the residency for artists are manifold: time to reflect on their practice; working alongside researchers to deepen a line of artistic research with them; developing or refining an innovative creative tool; conducting experimental artistic research; producing the prototype of an artifact; composing the sketch of a piece or performance; testing a configuration or an immersive 360-degree audio/video system such as the Satosph&egrave;re at the SAT or the Klangdom at the ZKM.</h6>\r\n<h6 style=\"text-align: justify;\">An international panel of experts evaluates each application. The evaluation is based on the originality and innovative character of the project, its collaborative aspects, and the applicant's experience and capacity to carry out the proposed project.</h6>\r\n<h6 style=\"text-align: justify;\">Following the final selection by an international jury, the laureates of the 2020-2021 call for residencies of the artistic research residency program will be announced at the Forum Workshops in Paris in March 2020.</h6>\r\n<h6 style=\"text-align: justify;\">Each laureate is granted a residency at Ircam within the requested host project team, possibly followed by a period of co-residency, for a fixed total duration (between 2 and 6 months) spanning April 2020 to December 2021.&nbsp;</h6>",
        "topics": [
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 167,
                "name": "Mouvement",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 168,
                "name": "Parole",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 170,
                "name": "Partition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 171,
                "name": "Transdisciplinarité",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 6,
            "forum_user": {
                "id": 6,
                "user": 6,
                "first_name": "Paola",
                "last_name": "Palumbo",
                "avatar": "https://forum.ircam.fr/media/avatars/_DSC8129.jpeg",
                "avatar_url": "/media/cache/fc/4e/fc4eec9cd07d03302b5a8091cf755eb4.jpg",
                "biography": "Paola Palumbo is the Events and Marketing Manager of the Ircam Forum.\nThe Ircam Forum is the community of users of Ircam software; it comprises the platform forum.ircam.fr and the Forum Workshops, where artists and scientists from all around the world converge.\nFrom 2011 to 2017 she was also coordinator of the Research and Creativity Interfaces Department and accompanied artists in the IRCAM Musical Research Residency Program.\nShe is co-founder of the Ircam Live electro concerts (2011-2015) and of the Forum Hors les Murs events (Seoul 2014, Buenos Aires and Sao Paulo 2015, Taiwan 2016, Santiago de Chile 2017, Shanghai 2019, Montreal 2021).\nShe is in charge of several international partnerships with universities and cultural organisations.\n\nShe has collaborated with several festivals, such as Image Sonore and Les Vieilles Charrues, in charge of programming and partnerships.\n\nPreviously she received a Master's degree in Public Politics and Social Change (\"Cultural Project Management\") at the University Pierre Mendès France (UPMF), Institut d’Etudes Politiques (IEP), Observatoire des Politiques Culturelles (OPC), Grenoble, France, and a Master's degree in Political Science from the University « La Sapienza », Roma, Italy.",
                "date_modified": "2026-03-03T17:50:06.221851+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 424,
                        "forum_user": 6,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [
                            {
                                "id": 343,
                                "membership": 424
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "palumbo",
            "first_name": "Paola",
            "last_name": "Palumbo",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 21,
                    "emitter_object_id": 277,
                    "user": 6,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "chuchotements-burlesques",
        "pk": 238,
        "published": true,
        "publish_date": "2019-08-29T15:21:50+02:00"
    },
    {
        "title": "ASAP - Creative Exploration and Transformation of Sound (Workshop) by Pierre Guillot",
        "description": "",
        "content": "<p><span>Through practice and concrete examples, participants will learn how to use the ASAP tools for transforming sound: cross-synthesis, pitch transposition, time stretching, spectral filtering and spectral remixing. Pierre Guillot will present the functionalities offered by the ASAP collection, and in particular the plug-ins based on ARA2&nbsp;technology. The Psycho Filter plug-in lets you draw filter shapes on the sound spectrogram and control their gain and fade. The sound representation and user interface enable you to create highly complex and precise surface filters to reduce or enhance specific parts of the sound's spectral components, to compensate for annoying artifacts in the sound, to isolate certain specificities of the sound, and to transform the sound creatively. The Pitches Brew plug-in lets you transpose the pitch and formants of sounds by drawing and modifying their frequency curves. Beyond the exceptional quality of the processing, the plug-in offers a visual representation of the original fundamental frequencies, expected pitches, and formants, with curves enabling numerous original edits such as redrawing, transposing, stretching, copying, etc.</span></p>\r\n<p><span>In this talk, Pierre Guillot will give a brief introduction to the historical heritage and the artistic and research context in which ASAP is developed, highlighting the challenges and innovative nature of the project. We will then present the possibilities offered by this suite of tools and discuss the prospects for further developments and improvements.&nbsp;ASAP is a set of audio plug-ins for transforming sound creatively. You are invited to play with the sound representation and the synthesis parameters to generate new sounds. The plug-ins can also be used to correct defects in the sound and to improve the audio rendering. Thanks to the ARA2 integration, the spectral transformations are integrated into your editing workflow.</span></p>\r\n<p><span><span>More info:&nbsp;</span><a href=\"https://forum.ircam.fr/projects/detail/asap/\">https://forum.ircam.fr/projects/detail/asap</a></span></p>\r\n<p><img src=\"/media/uploads/image_asap.png\" alt=\"\" width=\"878\" height=\"494\" /></p>",
        "topics": [],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "asap-creative-exploration-and-transformation-of-sound-workshop-by-pierre-guillot",
        "pk": 3077,
        "published": true,
        "publish_date": "2024-10-25T11:19:31+02:00"
    },
    {
        "title": "SDI (Spatially distributed instrument) by Jan Hennig",
        "description": "Spatialising musical performances using rule-based systems for addressing multiple instances of identical sound generating devices.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">Presented by Jan Ove Hennig</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/kabuki/\" target=\"_blank\">Biography</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p>Traditionally, the location from which a note emanates is inherently connected to the position of the instrument it was produced by. For example, the timbre of the violoncello in a classical orchestra is married to its defined position in the sound stage to the right of the conductor. With the piano, notes played in the lower register are produced to the left of the player, and higher notes further to the right. When it comes to determining the position of a sound that is fundamentally position-less (such as a digitally generated waveform), spatialization is often considered an additional aspect of the performative process. This is further reinforced by the lack of standardized interfaces for providing control over spatial aspects on the same intuitive level that traditional musical instruments offer over pitch, amplitude, duration and timbre.</p>\r\n<p>As an alternative, \"SDI\" is distributing the instrument in space by installing identical instances of the same sound-generating device, and then using pre-determined rules set by the performer that address these devices in real time. 
This stands in contrast with the established practice of controlling the levels of specific sound sources played back through a system of speakers, while the rule-based approach also sets it apart from stochastic processes.</p>\r\n<p>Electrodynamic exciters are used to turn the vibrating objects into representations of the instrument itself. Through the use of identifiable sources from which the sound emanates, the spatial aspect of artistic performance can be appreciated on an intuitive level by the audience. Selecting and modifying the objects that vibrate become integral aspects of the performance, contrary to conventional spatialisation practices in electro-acoustic music where the loudspeakers are required to precisely reproduce the intent of the composer or performer without coloring the sound.</p>\r\n<p>On a technical level, this is realized with the help of compact devices built around a Raspberry Pi running an RNBO patch. They are addressed by Max/MSP over UDP, making the communication between the performance interface and the sound generators reliable and nearly instant. In contrast to existing multi-speaker configurations where the sound is generated by a central instrument or playback device and then reproduced in different positions of the physical space, SDI only sends messages to the individual instances of the instrument, where they are then converted into sound.</p>\r\n<p>In summary, SDI puts the performer in a position to control multiple destinations from a single point of origin without having to assign spatial position as a separate parameter, opening new doors for programmatic ways of integrating real-time spatialisation into performances.<br /><img alt=\"Components\" src=\"https://forum.ircam.fr/media/uploads/user/b6da9712cb096c2d8372f4bbd26b5da0.jpg\" /></p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2350,
                "name": "Raspberry Pi",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2349,
                "name": "RNBO",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 59124,
            "forum_user": {
                "id": 59059,
                "user": 59124,
                "first_name": "Jan Ove",
                "last_name": "Hennig",
                "avatar": "https://forum.ircam.fr/media/avatars/Kabuki_Portrait_-_Processed.jpg",
                "avatar_url": "/media/cache/d0/7f/d07f990b002b5d863a5794680b842936.jpg",
                "biography": "I'm a sound artist and music producer based in Frankfurt, Germany with a passion for sharing knowledge. I've worked as lecturer at the Abbey Road Institute in Frankfurt (with focus on Max/MSP and sound synthesis) and developed video series for Softube (Modular Sound Explorations) and Korg (Sequencing Strategies) among others. In addition to releasing music and performing live with my modular synthesizer I'm also exhibiting large-format audio installations based around my interests in 3d printing, microcontrollers and their interactions with sensors and physical objects.",
                "date_modified": "2025-12-08T20:39:01.777661+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 965,
                        "forum_user": 59059,
                        "date_start": "2024-10-17",
                        "date_end": "2025-10-17",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "kabuki",
            "first_name": "Jan Ove",
            "last_name": "Hennig",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2759,
                    "user": 59124,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "sdi-spatially-distributed-instrument",
        "pk": 3079,
        "published": true,
        "publish_date": "2024-10-26T01:58:13+02:00"
    },
    {
        "title": "Test ",
        "description": "Test",
        "content": "<p>test</p>",
        "topics": [],
        "user": {
            "pk": 131883,
            "forum_user": {
                "id": 131708,
                "user": 131883,
                "first_name": "Grenoble",
                "last_name": "Bolvan",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f90c105a779ccdc4c3852c1f422ffe6c?s=120&d=retro",
                "biography": "Bonjour a vous",
                "date_modified": "2025-09-04T18:44:38.803222+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "grenoblebolvan",
            "first_name": "Grenoble",
            "last_name": "Bolvan",
            "bookmarks": []
        },
        "slug": "test-5",
        "pk": 3673,
        "published": false,
        "publish_date": "2025-09-04T18:45:23.936900+02:00"
    },
    {
        "title": "Textural Exploration in Bufferless Granular Synthesis",
        "description": "My approach to developing a granular system has followed a different path, omitting the use of a buffer. Instead, I have focused on generating grains from the audio signal in real time.",
        "content": "<p>In the field of contemporary music and live electronics, granular synthesis has played a crucial role in the manipulation and transformation of sound. Traditionally, this technique fragments a sound into small pieces called grains, which are manipulated and rearranged over time to create complex sound textures. This process generally relies on a buffer to store the audio and allow the fragmentation and manipulation of the grains in real time. However, my approach to developing a granular system has followed a different path, omitting the use of a buffer. Instead, I have focused on generating grains from the audio signal in real time.</p>\n<p>&nbsp;</p>\n<p>The inspiration behind this approach comes directly from Barry Truax's &lsquo;Riverrun&rsquo; (1986), one of the first pieces to employ real-time granular synthesis. Truax used the PDP-11 computer, one of the first widely available minicomputers in the 1970s and 1980s, known for its ability to process data in real time and handle interactive tasks. Through this system, he designed a process that stored sound fragments in buffers, which were then manipulated through granular synthesis to generate thousands of tiny &lsquo;grains&rsquo; of audio per second.</p>\n<p>&nbsp;</p>\n<p>Instead of using a buffer to store audio fragments, my system relies on direct, real-time manipulation of the sound through Max/MSP, exploring how small variations in the signal can generate changing, dynamic textures without the need for pre-stored recordings. This approach has allowed me to take granular synthesis into unusual territories, where each performance is unique.</p>\n<p>&nbsp;</p>\n<p>The next step in this process was to integrate this system into my compositions, where live electronics and the interaction with the sound generated by my instrument, the tuba, play a central role, but this procedure can be applied to any audio signal. 
This combination of techniques has allowed me to create dense and enveloping soundscapes, where every note and every gesture of the performer affects the way the grains of sound unfold and transform, and where technology and music coexist in a shared space of constant evolution.</p>\n<p>&nbsp;</p>\n<p>Watch the video below:</p>\n<p><a href=\"https://youtube.com/shorts/v4lppgo_doA\" title=\"Sample\">Granular Synthesis with Tuba Input</a></p>",
        "topics": [
            {
                "id": 70,
                "name": "Audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 288,
                "name": "développeur",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 308,
                "name": "Live electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 79,
                "name": "Max8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 265,
                "name": "Sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1803,
                "name": "synthese granulaire ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 84820,
            "forum_user": {
                "id": 84719,
                "user": 84820,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/fabianindex3.jpeg",
                "avatar_url": "/media/cache/1c/be/1cbef26fb5d678be8359a9d30eebb230.jpg",
                "biography": "Tuba Player + Live Electronics Performer\nSound Artist & Researcher\n\nDeveloping a special interest in sound, its analysis, synthesis and manipulation, his work is always REACTIVE. In his work, live electronics achieve reactivity through the analysis of the sounds generated from his instrument, the tuba, and the movement of his body, creating in parallel visual material that is transformed by the sound impulses it receives.",
                "date_modified": "2025-05-17T03:38:29.108525+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fabiancm",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3090,
                    "user": 84820,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "textural-exploration-in-bufferless-granular-synthesis",
        "pk": 3090,
        "published": true,
        "publish_date": "2024-11-04T00:44:06.710936+01:00"
    },
    {
        "title": "Segmentation of sound by silence-not-silence principle",
        "description": "How to get lists of fragments with silence-not-silence time markers, with the ability to control the tail length, something like a margin when going from sound to silence?",
        "content": "<p>Hi! I'm using Max and recently I discovered MuBu.</p>\n<p>I'm pretty sure that with MuBu it's possible to get lists of fragments with silence-not-silence time markers, with the ability to control the tail length, something like a margin when going from sound to silence.</p>\n<p><img src=\"https://cycling74-web-uploads.s3.amazonaws.com/63e372b31c20792f47527d2f/2024-10-12T17:39:17Z/2024-10-12_22-27-53.png\"></p>\n<p>Like this:</p>\n<p>List:<br>Not Silence: start point: 0.00, end point 15.00;<br>&nbsp; &nbsp; &nbsp;Silence: start point: 15.00, end point 25.00;<br>... And so on and so forth until the end of the audio length.</p>\n<p>Maybe someone can help me with this, or give me a hint?</p>",
        "topics": [
            {
                "id": 207,
                "name": "Ableton",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 486,
                "name": "Msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 61,
                "name": "Mubu",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 67015,
            "forum_user": {
                "id": 66945,
                "user": 67015,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7896f59635845eeed58b9e47faca0974?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-01-08T20:08:44.281856+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "justevanproducer",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "segmentation-of-sound-by-silence-not-silence-principle",
        "pk": 3202,
        "published": false,
        "publish_date": "2025-01-08T19:57:51.873181+01:00"
    },
    {
        "title": "IRCAM: A Virtual Visit",
        "description": "A keynote by Grégoire Lorieux, 25 Sept. 2025, Riga (Latvia)",
        "content": "<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\n<p>This keynote about IRCAM (Institut de Recherche et Coordination Acoustique/Musique) offers participants a unique opportunity to discover one of the world&rsquo;s leading centers for contemporary music research and sound innovation. From its subterranean labs beneath the Centre Pompidou in Paris to its experimental performance spaces, IRCAM has been at the forefront of electronic music, acoustic science, and creative technology since 1977. &nbsp;From real-time audio processing to AI-assisted composition and spatialization, IRCAM brings together composers, scientists, and engineers to explore the future of musical creation.</p>\r\n<p><img src=\"/media/uploads/exterieur_ircam_03_1200.jpg\" alt=\"\" width=\"660\" height=\"487\" />&nbsp; &nbsp;<img src=\"/media/uploads/image_studio_avec_chercheurs_ircam.jpeg\" alt=\"\" width=\"729\" height=\"486\" /></p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [],
        "user": {
            "pk": 3044,
            "forum_user": {
                "id": 3042,
                "user": 3044,
                "first_name": "Gregoire",
                "last_name": "Lorieux",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/cd7913e7acfc03b53fbc5d9c30da67ce?s=120&d=retro",
                "biography": "Grégoire Lorieux is a composer, artistic director, and computer music designer, teaching at IRCAM. After studying early music and completing a master’s thesis on Kaija Saariaho, he studied composition with Philippe Leroux and at the Conservatoire de Paris, while joining IRCAM as a technology professor. In 2012, he took part in SPEAP at Sciences Po Paris with Bruno Latour, exploring connections between art, ecology, and social engagement. Active in education, he has led numerous projects combining creation and cultural outreach, such as IRCAM’s Ateliers de la Création and Paysages Composés with Ensemble Ars Nova and Quatuor Diotima. From 2013 to 2024, he was co-director of Ensemble Itinéraire. He taught electroacoustic composition at the Paris Conservatoire from 2019 to 2024. His musical language integrates electronics and French spectralism, exploring various formats from installations to concert works. In 2022, he founded Mondes Sonores, an open-air festival linking music and ecology.",
                "date_modified": "2026-02-27T15:38:40.219400+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 354,
                        "forum_user": 3042,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 25,
                                "membership": 354
                            },
                            {
                                "id": 599,
                                "membership": 354
                            },
                            {
                                "id": 655,
                                "membership": 354
                            },
                            {
                                "id": 781,
                                "membership": 354
                            },
                            {
                                "id": 917,
                                "membership": 354
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "lorieux",
            "first_name": "Gregoire",
            "last_name": "Lorieux",
            "bookmarks": []
        },
        "slug": "ircam-a-virtual-visit",
        "pk": 3558,
        "published": true,
        "publish_date": "2025-07-17T11:18:05+02:00"
    },
    {
        "title": "Tutorial: Training RAVE models on custom data",
        "description": "Learn to train RAVE models on custom data.",
        "content": "<h1>Video Tutorial</h1>\r\n<p><iframe width=\"425\" height=\"350\" style=\"width: 50% !important;\" src=\"//www.youtube.com/embed/MlbkSMLoWBk\"></iframe></p>\r\n<p></p>\r\n<p>Are you intending to train your own RAVE model on a dedicated machine with SSH? This tutorial is made for you!</p>\r\n<p>In this article, we will explain how to</p>\r\n<ol>\r\n<li>install RAVE on a computer / remote server</li>\r\n<li>choose &amp; preprocess a dataset</li>\r\n<li>choose the right configuration</li>\r\n<li>monitor the training</li>\r\n<li>export the model for RAVE VST / nn~.</li>\r\n</ol>\r\n<p><strong>Warning</strong>: this tutorial does not explain how to train RAVE on unofficial Google Colab notebooks, like <a href=\"https://colab.research.google.com/drive/1ih-gv1iHEZNuGhHPvCHrleLNXvooQMvI?usp=sharing\">this one by Mois&eacute;s Horta</a>.</p>\r\n<h1>Preparing the training</h1>\r\n<h3>Prerequisites</h3>\r\n<p>To train a model, you need:</p>\r\n<ul>\r\n<li>a computer / server with a GPU <strong>of at least 8GB</strong> (5GB for the raspberry configuration) <em>for the full duration of the training</em>, with SSH access. You can check the minimum GPU memory required for a given RAVE architecture in the <a href=\"https://github.com/acids-ircam/RAVE/blob/master/README.md#training\">README</a>.</li>\r\n<li>a dataset of at least <strong>one hour</strong>. If you don't have one, fetch one from websites like <a href=\"https://www.kaggle.com/\">Kaggle</a>, or generate synthetic data with frameworks like <a href=\"http://ajaxsoundstudio.com/software/pyo/\">pyo</a>.</li>\r\n</ul>\r\n<p><strong>Warning</strong>: The duration of a full RAVE training is difficult to predict exactly, as it depends on the chosen configuration, the data, and your machine. 
Usually, the first training phase lasts about three or four days, and the second phase may take from four days to three weeks.</p>\r\n<h3>Install Python through miniconda</h3>\r\n<p>Here, we will see how to install all the required dependencies on an empty Linux server, such as one you can rent online on platforms like <a href=\"https://vast.ai/\">vast.ai</a>. If you want to train on your own computer or server, the installation is the same; however, you may encounter some specific problems because of the current internal state of your machine (dependencies, bash profiles, etc.). In case of problems, do not hesitate to make a clean pass on your machine!</p>\r\n<p><strong>Note 1:</strong> On Windows, you will have to install <a href=\"https://gitforwindows.org/\">GitBash</a> as a terminal interface to follow this tutorial. <br /> <strong>Note 2:</strong> Some online GPU servers offer Docker images with PyTorch preinstalled, so you may not need to follow some of the steps below.</p>\r\n<p>In this tutorial we will use <code>miniconda</code> as a Python package manager; it installs a minimal framework that lets you safely create Python environments. 
<a href=\"https://docs.anaconda.com/free/miniconda/index.html\">Download miniconda from the webpage</a>, and install it on your platform.</p>\r\n<pre><code># prepare a folder for miniconda in the home folder (called ~)\r\nmkdir -p ~/miniconda3\r\n# Download miniconda (for Linux; on Windows, use the installer from the miniconda webpage instead)\r\nwget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh\r\n# install miniconda\r\nbash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3\r\nrm -rf ~/miniconda3/miniconda.sh\r\n# init miniconda\r\n~/miniconda3/bin/conda init bash\r\n~/miniconda3/bin/conda init zsh\r\n# close and reopen your terminal, or run the following commands\r\ncd ~\r\nsource .bashrc\r\n</code></pre>\r\n<p><code>miniconda</code> should now be installed; you can verify it by the <code>(base)</code> indication at the very left of your command prompt.</p>\r\n<h3>Install Python environment</h3>\r\n<p><code>miniconda</code> is used to create Python environments, which are very useful to make sure that a given application won't mess up the requirements of others. For example, you can check the current location of your Python executable:</p>\r\n<pre><code>which python \r\n# should return sth like YOUR_MINICONDA_PATH/bin/python\r\n</code></pre>\r\n<p>That is, when the <code>(base)</code> environment is activated, it uses the default <code>python</code> executable of miniconda. 
We will now create an environment specific to RAVE:</p>\r\n<pre><code># create environment\r\nconda create -n RAVE python=3.9\r\n# activate environment\r\nconda activate RAVE \r\nwhich python \r\n# should return sth like YOUR_MINICONDA_PATH/envs/RAVE/bin/python\r\n</code></pre>\r\n<p>If your RAVE environment has been activated properly, you should see a <code>(RAVE)</code> indication instead of <code>(base)</code> at the very left of your command prompt. By using <code>which</code>, we can see that the <code>python</code> executable of our environment has changed: our dependencies are now isolated from other applications, and will not interfere with them!</p>\r\n<p><img src=\"assets/article_3/environment_ok.png\" /></p>\r\n<p>Now that we have created our environment, we can install RAVE and its required dependencies:</p>\r\n<pre><code>which pip\r\n# should return sth like YOUR_MINICONDA_PATH/envs/RAVE/bin/pip\r\npip install acids-rave\r\n</code></pre>\r\n<p>We install RAVE and its requirements with the Python package manager <code>pip</code> (checking first that it is the one from our environment). The installation may take some time, so it is a good moment to take care of the dataset in the meantime.</p>\r\n<h3>Data preparation</h3>\r\n<p>The training dataset needs to be preprocessed before it is usable for a RAVE training. 
It is difficult to give precise guidelines on what makes a good dataset, but some criteria are mandatory for the model to be trained accurately:</p>\r\n<ul>\r\n<li><strong>amount of data</strong> - the more data you have, the better the model will be able to understand the underlying properties of your dataset.</li>\r\n<li><strong>homogeneity</strong> - if your dataset comprises very different types of sounds, the model may struggle to learn all of them; however, if the dataset is too similar, the model may fall into a low-capacity behavior with very little variety. For example, a dataset of a single instrument with various playing styles is usually a good choice, as it is both varied and constrained in the kind of sounds it gathers. Similarly, training on music samples of a given musical genre will provide enough diversity for the model to generalize well, while letting it find common variations that it will be able to generate during inference.</li>\r\n<li><strong>audio quality</strong> - of course, if your sounds have low audio quality or are very noisy, this will make the training procedure more difficult. You should also consider the overall dynamics of your dataset: if some sounds are much louder than others, they will generally be learned better than sounds with a lower amplitude. If needed, you may consider making a compression / normalization pass over the whole dataset, in order to help the model generate these sounds more accurately.</li>\r\n</ul>\r\n<p>Here, we will use the <a href=\"https://www.kaggle.com/datasets/imsparsh/musicnet-dataset\">musicnet</a> dataset, containing 330 freely-licensed classical music recordings. 
All the audio files must be placed in a specific folder, and pre-processed with the <code>rave preprocess</code> command:</p>\r\n<pre><code>rave preprocess --input_path /path/to/your/dataset --output_path /target/path/of/preprocessed/files --channels 1\r\n</code></pre>\r\n<p>We can indicate the number of channels with the <code>--channels</code> keyword:</p>\r\n<ul>\r\n<li>if we want our model to generate mono signals, write <code>--channels 1</code></li>\r\n<li>if we want a stereo model, write <code>--channels 2</code></li>\r\n<li>for a quadriphonic model, write <code>--channels 4</code></li>\r\n<li>...</li>\r\n</ul>\r\n<p>However, only mono models are compatible with RAVE VST, so we will train a monophonic model with <code>--channels 1</code>. Now, let&rsquo;s launch this command and wait for the preprocessing to be done. Once preprocessing is finished, there should be two files in its output directory:</p>\r\n<pre><code>cd /target/path/of/preprocessed/files\r\nls\r\n# &gt; data.mdb      metadata.yaml\r\ncat metadata.yaml\r\ndu -sh data.mdb\r\n</code></pre>\r\n<p>The <code>data.mdb</code> file contains the compressed data that was just pre-processed, and <code>metadata.yaml</code> contains some information about your dataset.</p>\r\n<p><strong>Pro-tip</strong>: you can add the <code>--lazy</code> flag to preprocessing, which uses <code>ffmpeg</code> in real time during training. This avoids duplicating the data into another folder, but also slows down data loading during training. Use it if you have a really big dataset!</p>\r\n<h1>Training your model</h1>\r\n<p>RAVE has been installed, and the data is ready. We are now able to start our training!</p>\r\n<h3>Detaching your process</h3>\r\n<p>Before starting the training, we have to make sure that it does not stop when we close the terminal window. 
Indeed, when you launch a command in the terminal, the process is said to be <em>attached</em> to your terminal window: when you close the window, you kill the process. We therefore have to <em>detach</em> the process from the window; on Linux, we will use <code>screen</code> here, but you could also use <code>tmux</code> or similar tools.</p>\r\n<pre><code># If your machine is fresh, screen may not be installed by default\r\nsudo apt install screen\r\n# We make a screen called train_musicnet\r\nscreen -S train_musicnet\r\n</code></pre>\r\n<p>You should see a cleared window; you can think of it as a tab in a browser, opened in parallel to the window you were in before. With <code>screen</code>, you can detach the current window with Ctrl+A, then D: you should be back in the previous window. Now, if you close the terminal window and come back, none of the commands you launched in the <code>train_musicnet</code> window will be affected!</p>\r\n<p>You can see the list of screens with the <code>-ls</code> keyword; let's go back into the screen where we will start the training, and activate our <code>RAVE</code> environment.</p>\r\n<pre><code># display possible screens\r\nscreen -ls\r\n# re-attach train_musicnet\r\nscreen -r train_musicnet\r\nconda activate RAVE\r\n</code></pre>\r\n<h3>Training the model</h3>\r\n<p>Now is the big time: we will start the training process! There are a lot of possible options, which you can display with the <code>--help</code> keyword; it lists all possible keywords for the <code>train</code> command. For this training we will launch the following command, which we will describe keyword by keyword.</p>\r\n<pre><code>rave train --help\r\n# [...] 
every train keyword [...]\r\nrave train --name musicnet --db_path /path/to/dataset --out_path /path/to/model/out --config v3 --config noise --augment mute --augment compress --augment gain --save_every_epoch 100000\r\n</code></pre>\r\n<p>where you have to replace <code>/path/to/dataset</code> with your dataset path, and <code>/path/to/model/out</code> with the directory where you want your model to be saved. Here is a description of the keywords we chose:</p>\r\n<ul>\r\n<li><code>--name</code> is the name of your training; you can choose it arbitrarily</li>\r\n<li><code>--db_path</code> is the directory of your preprocessed data</li>\r\n<li><code>--out_path</code> is the output directory for your model and the training monitoring</li>\r\n<li><code>--config</code> specifies the model configuration, by default <code>v2</code>. You can add additional configurations by putting more than one <code>--config</code> keyword; more details below.</li>\r\n<li><code>--augment</code> adds data augmentations to the training; more details below.</li>\r\n<li><code>--save_every_epoch</code> saves the trained model every X epochs; handy to later resume from specific checkpoints of the model.</li>\r\n</ul>\r\n<p><strong>Choosing the architecture</strong>. The choice of the model architecture is the most important decision when training a model. 
The main configurations are the ones listed under the <em>Architecture</em> type of the <a href=\"https://github.com/acids-ircam/RAVE?tab=readme-ov-file#training\">config table in the official RAVE README</a>, which we show again just below:</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a32f496c42dc951b0f9b0939cba28cba.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>While this choice is up to you, here are some tips to wisely select the configuration you need.</p>\r\n<ul>\r\n<li>if your dataset is composed of simple or short sounds such as instrument samples, single voices, or sound fx, and you want the model to perform timbre transfer, we advise you to use either <code>v1</code> or <code>v2_small</code>. These configurations are lighter, and more suitable for sounds that are not very complex.</li>\r\n<li>if your dataset is composed of layered music, or complex sounds, we advise you to use the <code>v2</code>, <code>v3</code>, or <code>discrete</code> configuration. If you want to use RAVE as a synthesizer by directly controlling the latent variables, do not use the <code>discrete</code> configuration; however, if you plan to train a prior using <code>msprior</code>, use the discrete configuration.</li>\r\n<li>if you want to use your model on a Raspberry Pi, select the <code>raspberry</code> configuration. This configuration is not able to learn complex sounds though, so choose your dataset wisely.</li>\r\n</ul>\r\n<p><strong>Regularization options.</strong> With the <code>v2</code> configuration, some additional regularization options are available. Regularization has an impact on how the model builds its latent space, and thus on how sounds are organized along the latent parameters. Regularization strategies can also affect the output quality. 
Quoting again the <a href=\"https://github.com/acids-ircam/RAVE?tab=readme-ov-file#training\">config table in the official RAVE README</a>:</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/48429bf077033121f6c9906a5fb9fac6.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<ul>\r\n<li><em>default</em> is the classic regularization term for variational auto-encoders, used by default. Use this one before any further experiment!</li>\r\n<li><em>wasserstein</em> is a regularization term inspired by optimal transport; it may provide better reconstruction results, at the price of a messier latent space (no smoothness in latent exploration, for example)</li>\r\n<li><em>spherical</em> enforces the latent space to be distributed on a sphere. It is experimental; do not try that first!</li>\r\n</ul>\r\n<p><strong>Additional options</strong>. Last but not least, some very important options are available in the <em>Others</em> section of the <a href=\"https://github.com/acids-ircam/RAVE?tab=readme-ov-file#training\">config table in the official RAVE README</a>:&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b9902ac778e6b53b0781c2e2b4019797.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<ul>\r\n<li><em>causal</em>: enforces the model to only use past samples of the incoming waveform. 
In real-time setups, this will reduce the perceived latency of the model; however, reconstruction quality will be lower.</li>\r\n<li><em>noise</em>: adds a noise synthesizer to RAVE's decoder; may be important for learning sounds with significant noisy components.</li>\r\n<li><em>hybrid</em>: replaces the input of the encoder with a mel representation of the incoming signal; may be interesting for learning on voice.</li>\r\n</ul>\r\n<p><strong>Choosing augmentations.</strong> The <code>--augment</code> keyword can be used to add data augmentations to your training process, which can be very important for small datasets. Data augmentations randomly perform signal operations on the data at training time, virtually increasing the diversity of the dataset. Three data augmentations are available so far:</p>\r\n<ul>\r\n<li><code>mute</code> randomly silences incoming batches, enforcing your model to learn silence if your dataset does not contain any</li>\r\n<li><code>compress</code> randomly applies small amounts of compression, allowing the model to be trained on dynamical modifications of your sounds</li>\r\n<li><code>gain</code> randomly applies gain (by default between -6 and 3 dB) to the incoming data, allowing the model to be trained on different amplitudes of your sounds</li>\r\n</ul>\r\n<p>Do not hesitate to try them out, especially if your dataset is small!</p>\r\n<p><strong>Launching the training</strong>. Once you have chosen your training configuration, everything is set up! Just launch your command and, once the training status bar appears, you're all good! You can detach the process, and start monitoring your training.</p>\r\n<p><em>Note:</em> If your training fails at some point for some reason, you can resume the last saved state of your training with the <code>--ckpt</code> keyword:</p>\r\n<pre><code>rave train [...your previous training args...] 
--ckpt /path/to/model/out\r\n</code></pre>\r\n<p>and RAVE will automatically detect the last saved checkpoint of your training. If you want to restart from a specific checkpoint, you can write the full path to your <code>.ckpt</code> file:</p>\r\n<pre><code>rave train [...your previous training args...] --ckpt /path/to/model/out/version_X/checkpoints/model_XXXXX.ckpt\r\n</code></pre>\r\n<h1>Monitoring your training</h1>\r\n<h3>Connecting to tensorboard</h3>\r\n<p>You can monitor your training using <code>tensorboard</code>, a very useful tool that displays several metrics about the current training state, as well as some sound samples. To do this, make another <code>screen</code> (or <code>tmux</code>) and launch the <code>tensorboard</code> command in the root of your training output directory:</p>\r\n<pre><code>screen -S monitor \r\nconda activate RAVE\r\ncd /path/to/model/out\r\ntensorboard --logdir . --port XXXX\r\n</code></pre>\r\n<p>where you replace <code>XXXX</code> with a four-digit port of your choice. At some point, <code>tensorboard</code> should give you an address; if the training is on your computer, you can directly copy and paste the given address into your favorite browser. However, if you are connected over SSH, you will have to bridge the port you chose with the ssh port you connect on. 
You can redirect a port with the <code>-L</code> keyword; for example, by connecting to your server with</p>\r\n<pre><code>ssh your/ssh/address -L 8080:localhost:8080\r\n</code></pre>\r\n<p>and setting <code>XXXX</code> in the tensorboard command to <code>8080</code>, you should be able to connect on your local machine with the <code>127.0.0.1:8080</code> address in your browser.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d3d1a7c830788979b3ac3a25b4983ea9.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h3>Monitoring metrics</h3>\r\n<p>Tensorboard has a <em>Scalars</em> tab, where you can monitor various metrics of your training. First, be aware that RAVE's training process has two distinct phases:</p>\r\n<ul>\r\n<li><em>phase 1: auto-encoding phase</em> - The encoder and the decoder of the model are trained jointly on a spectral loss;</li>\r\n<li><em>phase 2: adversarial phase</em> - The encoder is frozen and a discriminator is introduced; the decoder is fine-tuned with the discriminator using an adversarial training setup.</li>\r\n</ul>\r\n<p>The duration of the first phase is defined by the chosen configuration, but is 1 million batches most of the time.</p>\r\n<p><strong>Monitoring phase 1.</strong> The encoder and the decoder are trained using three losses: the <code>fullband_spectral_distance</code>, the <code>multiband_spectral_distance</code>, and the <code>regularization</code>. Typically, the first two losses should decrease regularly during this first phase. 
Losses are plotted by batch, so the curves may be a little noisy; do not hesitate to play with the <em>smoothness</em> parameter on the left to see the averaged evolution of your loss.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9729b27dcd32a79d6939e622bfb25b47.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>The <code>regularization</code> loss is an additional term that influences the shape of your latent space. The meaning of this loss depends on the regularization strategy chosen in the configuration:</p>\r\n<ul>\r\n<li>with <code>default</code> regularization, this term is the divergence between the latent distribution and an isotropic Gaussian. A regularization of zero typically means that your latent space is random noise, a degenerate behavior. Conversely, a very high regularization term means that your latent space is very \"rough\", or clustered around undesirable values.</li>\r\n<li>with <code>wasserstein</code>, this term evaluates how much your latent space resembles a unit open ball. The difference with the previous one is that it only prevents latent parameters from straying far from this ball, but does not penalize the \"roughness\" of the latent space. This can help the model reconstruct the input sound accurately, but can also place very different data nearby in the latent space; be careful if you plan to manipulate latent variables manually.</li>\r\n<li>with <code>spherical</code>, there is no regularization term: latent projections are forced to lie on a multi-dimensional sphere.</li>\r\n</ul>\r\n<p>Besides the training loss, other important metrics for this phase 1 are the <em>latent_pca</em> plots. You should have four of these, named <em>latent_pca</em> followed by a float number. 
This plot is an indicator of the latent topology of the model: it is the number of dimensions a PCA needs to explain X% of your dataset's variability, where X is the number after the curve's name. More concretely, let's take the following curve:</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/ee27b45415edf076da188e6bd1fe09df.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>it means that your model needs X dimensions out of 128 to represent 95% of your dataset after dimensionality reduction. Given that RAVE performs the same kind of dimensionality reduction at export, it also indicates the number of dimensions that you will be able to control (by default; see below). These curves are very useful for gaining insight into your latent space's diversity: typically, if your dataset is very complex and your model only needs 2 dimensions to explain it, there might be something wrong with your training. Note, however, that this criterion is very loose; don't be too anxious about it.</p>\r\n<p><strong>Monitoring phase 2.</strong> The training drastically changes during phase 2: adversarial training comes into play!</p>\r\n<p>Adversarial training is typically hard to monitor, so let's explain things a bit. Adversarial training is based on a generator and a discriminator, trained in a concurrent way: the discriminator is trained to separate the generator's output from the real data, and the generator is trained to fool it. As training goes by, the discriminator should be able to detect synthesized sound more and more accurately, and the generator should thus synthesize more and more realistic data.</p>\r\n<p>Adversarial training is a very efficient training scheme, but may be very unstable and is difficult to monitor: indeed, the adversarial loss does not indicate at all whether the model has been accurately trained, but rather the balance between the discriminator and the generator. 
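As a rough illustration of what a latent_pca value measures (an illustrative sketch, not RAVE's actual code; the latent matrix here is synthetic), one can count the PCA dimensions needed to reach a given fraction of the dataset variance:

```python
import numpy as np

def dims_for_variance(latents: np.ndarray, fraction: float = 0.95) -> int:
    """Number of principal components needed to explain `fraction` of the variance."""
    centered = latents - latents.mean(axis=0)
    # Squared singular values of the centered data are proportional to
    # the variance carried by each principal component
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    ratio = np.cumsum(var) / var.sum()
    return int(np.searchsorted(ratio, fraction) + 1)

# Synthetic latents: 3 dominant dimensions out of 16
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 16)) * np.array([10.0, 8.0, 6.0] + [0.1] * 13)
n_dims = dims_for_variance(z, 0.95)  # only the dominant dimensions are needed
```

A model whose latent_pca 0.95 curve settles at a tiny value behaves like this synthetic example: almost all the variability collapses onto a few axes.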
Hence, if the discriminator has a very low loss (it detects the synthesized data every time), it can mean that the generator is very bad, or that the discriminator is very powerful (and conversely).</p>\r\n<p>Typically, the spectral losses used in phase 1 will suddenly jump; do not worry, this is a normal behavior, as the introduction of the discriminator perturbs the system a little bit. Once your model has reached phase 2, you should be able to monitor new losses in your <code>tensorboard</code>: <code>adversarial</code>, <code>pred_fake</code>, <code>pred_real</code>, <code>loss_dis</code>, and <code>feature_matching</code>.</p>\r\n<ul>\r\n<li><code>adversarial</code> is the adversarial loss of the model's decoder: a low value means that the decoder manages to fool the discriminator.</li>\r\n<li><code>loss_dis</code> is the adversarial loss of the model's discriminator: a low value means that the discriminator accurately separates real from synthesized data.</li>\r\n<li><code>pred_real</code> and <code>pred_fake</code> show the centroids of, respectively, the real and synthesized data in the last layer's embedding of the discriminator. The discriminator is typically trained to separate both in a binary way; hence, these centroids are trained to be well separated.</li>\r\n<li><code>feature_matching</code> is an additional term used by the generator to help it match the real data statistics within the discriminator's internal layers. 
This help is double-edged though: if the discriminator is bad, so will be this loss.</li>\r\n</ul>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/de39b7efb50be097094912c9d3dd5f1d.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h3>When should I stop the training?</h3>\r\n<p><strong>Listen to your sounds!</strong> Adversarial training is very difficult to monitor, especially with audio, where mathematical distances may be very far from the actual perceived quality; the best approach is simply to listen to the reconstructions. For this, go into the <em>Audio</em> tab of the tensorboard, and listen to the <em>audio_val</em> samples. Each audio extract plays, consecutively, the original and the reconstructed sound. If your target audio has several channels, they will be unfolded and also placed consecutively. The reconstructed samples come from the <em>validation</em> set, meaning that they were not part of the training data: this also helps evaluate the <em>generalization</em> abilities of your model.</p>\r\n<p><strong>Under-fitting vs. over-fitting</strong>. In generative machine learning, we teach models to reproduce a given set of data. In tasks such as recognition or classification, we typically want our models to perform as well on unknown data as on training data: if we want to distinguish meowing sounds from barking sounds, we do not want an alarm in the background to mislead our model. To this end, a good practice is to withhold a little bit of data from our training set, and evaluate our model regularly on this unseen data. As neural models are very powerful, they are very prone to a phenomenon called <em>over-fitting</em>: fitting the data so closely that they become over-sensitive to trivial details, and then fail to perform correctly on unseen data. 
The opposite is, without any suspense, <em>under-fitting</em>: the model performs quite poorly, but with similar results on seen and unseen data.</p>\r\n<p>For some machine learning tasks, we thus observe how our model performs on unseen data, and stop training as soon as the model starts to overfit. However, things are a little different in generative learning: we aim to <em>model</em> a given distribution. In some applications, such as neural codecs, generative modelling should be able to model unknown data. In some tasks though, such as <em>timbre transfer</em>, the objective is not so clear: overfitting would not prevent a creative use; it can be quite the opposite. However, <em>in-domain</em> over-fitting may be a good measure of your model's steadiness: for example, if you model violin sounds and your model fails drastically at re-generating them, it may indicate that your training went wrong, as it means that the model's internal representation is degenerate. This can be observed by watching the <em>validation</em> loss on the tensorboard: typically, if you observe repetitive spikes and an unstable evolution, it may be time to stop training your model.</p>\r\n<p><strong>Starting again from phase 2.</strong> Phase 2 may be particularly unstable, and its performance may depend on the chosen architecture, but also on the nature of your sounds. You may start again from a checkpoint that finished phase 1 (using <code>--ckpt</code>, see above) to override some adversarial parameters. For example, adjusting the update periodicity of the discriminator can help if the discriminator struggles to differentiate real from synthetic data: to do that, re-start your training by adding <code>--override \"model.RAVE.update_discriminator_every = 4\"</code> to your command. 
To access all the parameters of your training, you can check the <code>config.gin</code> file at the root of your model's output directory.</p>\r\n<h1>Exporting the model</h1>\r\n<p>You're happy with your model? Time to export it! To do that, use the <code>rave export</code> command:</p>\r\n<pre><code>rave export --run path/to/model --name your_model_name --output /path/to/save/exported/model --streaming True\r\n</code></pre>\r\n<p>Let's explain these options a little.</p>\r\n<ul>\r\n<li><code>--run</code> indicates the path of your model checkpoint. You can put the base folder, a specific version, or a specific <code>.ckpt</code> file.</li>\r\n<li><code>--name</code> is the name of the exported model; it is arbitrary, name it as you like!</li>\r\n<li><code>--output</code> is the output directory of the exported model; by default, it will be placed in the <code>--run</code> folder.</li>\r\n<li><code>--streaming</code> is very important: set it to <code>True</code> if you're planning to use the model with <code>nn~</code> or RAVE VST! If you don't, your model will click, as the convolution layers will not cache the streamed data.</li>\r\n</ul>\r\n<p>At the end, you should have your scripted model (with the <code>.ts</code> extension) in the output path. That's all!</p>\r\n<p><strong>Setting the model's dimensionality.</strong> Some additional options are available to manually set the number of controllable dimensions: this will influence, for example, the number of dimensions of the <code>encode</code> and <code>decode</code> functions in <code>nn~</code>. There are two ways to set the number of dimensions:</p>\r\n<ul>\r\n<li><code>--fidelity</code>: automatically finds the number of dimensions needed to explain the given fraction of the data in the latent space (with PCA, see above). 
For example, giving <code>--fidelity 0.98</code> will retrieve the number of latent dimensions required to explain 98% of the dataset variance.</li>\r\n<li><code>--latent_size</code>: sets a fixed number of accessible latent dimensions. For example, <code>--latent_size 8</code> will enforce 8 latent inputs / outputs in <code>nn~</code>, whatever fraction of the dataset variance they describe.</li>\r\n</ul>\r\n<p>Well, that's it! If you encounter bugs during training, do not hesitate to open an issue on the <a href=\"https://github.com/acids-ircam/RAVE/issues\">RAVE github</a>, or to find some help among other users on the <a href=\"https://discord.gg/BS9TtWAX\">RAVE discord</a>. Keep in touch!</p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 674,
                "name": "neural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 59,
                "name": "Synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20182,
            "forum_user": {
                "id": 20174,
                "user": 20182,
                "first_name": "Axel",
                "last_name": "Chemla-Romeu-Santos",
                "avatar": "https://forum.ircam.fr/media/avatars/Photo.jpg",
                "avatar_url": "/media/cache/f7/78/f778be374ea22ae4fcea1834f753924b.jpg",
                "biography": "Based in Paris, France, Axel Chemla-Romeu-Santos works as a researcher, composer, and performer in various fields such as music, theater, and artificial intelligence. After a double undergraduate degree in Engineering Sciences & Music Theory, he specialized in acoustics and computer music at IRCAM. Always looking for creativity through technology, he initiated a PhD between IRCAM (Paris) and LIM (Milano) on the creative uses of generative artificial intelligence for sound synthesis. After graduation, he continued a research & creation approach to artificial intelligence, working both on scientific papers on AI creativity, and experimental musical pieces exploring diverse aspects of these technologies (such as network bending, real-time improvisation, and composition). \nBesides institutional works, he also works as a musician and composer for the company Théâtre de la Suspension, is co-founder of the w.lfg.ng collective, member of the maximalist electronic music band Daim™, and has his personal project Kenoma.",
                "date_modified": "2025-10-21T19:56:31.408648+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 626,
                        "forum_user": 20174,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-18",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "chemla",
            "first_name": "Axel",
            "last_name": "Chemla-Romeu-Santos",
            "bookmarks": []
        },
        "slug": "training-rave-models-on-custom-data",
        "pk": 2870,
        "published": true,
        "publish_date": "2024-03-21T12:31:22+01:00"
    },
    {
        "title": "News from the ISMM team - Frédéric Bevilacqua, Diemo Schwarz, Riccardo Borghesi, Benjamin Matuszewski",
        "description": "News from the ISMM team: MuBu, CataRT, SkataRT, Gestural Sound Toolkit for Max, Soundworks for JavaScript.",
        "content": "<p><span><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<p></p>\r\n<p>Presented by: Fr&eacute;d&eacute;ric Bevilacqua, Diemo Schwarz, Riccardo Borghesi, Benjamin Matuszewski<br /><a href=\"https://forum.ircam.fr/profile/bevilacq/\">Biography of Fr&eacute;d&eacute;ric Bevilacqua</a><br /><a href=\"https://forum.ircam.fr/profile/schwarz/\">Biography of Diemo Schwarz&nbsp;&nbsp;<br /></a><a href=\"https://forum.ircam.fr/profile/borghesi/\">Biography of Riccardo Borghesi<br /></a><a href=\"https://forum.ircam.fr/profile/matuszewski/\">Biography of Benjamin Matuszewski</a></p>\r\n<p></p>\r\n<p>We will present the new features of the <strong><a href=\"https://forum.ircam.fr/projects/detail/mubu/\">MuBu for Max</a></strong> framework for multimodal analysis of sound and movement, interactive sound synthesis, and machine learning; the corpus-based synthesis tools&nbsp;<a href=\"https://forum.ircam.fr/projects/detail/catart-mubu/\"><strong>CataRT</strong></a> and&nbsp;<strong><a href=\"https://forum.ircam.fr/collections/detail/skatart/\">SKataRT</a></strong> for Max and Ableton Live; the&nbsp;<a href=\"https://forum.ircam.fr/projects/detail/gestural-sound-toolkit/\"><strong>Gestural Sound</strong><span><span>&nbsp;</span></span><strong>Toolkit</strong></a> for prototyping gesture-sound interaction scenarios; and the new version of the&nbsp;<a href=\"https://forum.ircam.fr/projects/detail/soundworks/\"><strong>Soundworks</strong></a> framework for JavaScript, with tutorials. We will also present various performances and demonstrations related to the ISMM team during the Forum (Prelude on Tuesday evening and DOTS on Friday afternoon).</p>\r\n<p></p>\r\n<p><span><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a></strong></span></p>",
        "topics": [
            {
                "id": 60,
                "name": "Catart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1865,
                "name": "catart-mubu",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 639,
                "name": "ISMM Team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 752,
                "name": "javascript",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 61,
                "name": "Mubu",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 632,
                "name": "Skatart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 751,
                "name": "soundworks",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 36,
            "forum_user": {
                "id": 36,
                "user": 36,
                "first_name": "Diemo",
                "last_name": "Schwarz",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/9bf9105c2fbdb55023f9437ac99a6630?s=120&d=retro",
                "biography": "Diemo Schwarz is a researcher at IRCAM, and a musician and creative programmer. He performs on his own digital musical instrument based on his CataRT open source software, exploring different collections of sound with the help of gestural controllers that reconquer musical expressiveness and physicality for the digital instrument, bringing back the immediacy of embodied musical interaction to the rich sound worlds of digital sound processing and synthesis.\nHe interprets and performs improvised electronic music as member of the ONCEIM improvisers orchestra, ensemble Ikosikaihenagone, and various other musicians, and he composes for dance and performance, video, and installation.\nHis scientific research on sound analysis/synthesis and gestural control of interaction with music is the basis of his artistic work, and allows him to bring advanced and fun musical interaction to expert musicians and the general public.\nIn 2017 he was DAAD Edgar-Varèse guest professor for computer music at TU Berlin, and in 2022 artist in residence in the Arts, Sciences, Societies fellowship program of IMéRA institute of advanced studies, Aix–Marseille Université.",
                "date_modified": "2026-02-24T12:21:32.536216+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 397,
                        "forum_user": 36,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-19",
                        "type": 0,
                        "keys": [
                            {
                                "id": 7,
                                "membership": 397
                            },
                            {
                                "id": 9,
                                "membership": 397
                            },
                            {
                                "id": 13,
                                "membership": 397
                            },
                            {
                                "id": 21,
                                "membership": 397
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "schwarz",
            "first_name": "Diemo",
            "last_name": "Schwarz",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 329,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 257,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 496,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 38,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 1045,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 21,
                    "emitter_object_id": 299,
                    "user": 36,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "news-from-the-ismm-team-frederic-bevilacqua-diemo-schwarz-riccardo-borghesi-benjamin-matuszewski",
        "pk": 2797,
        "published": true,
        "publish_date": "2024-03-04T16:16:38+01:00"
    },
    {
        "title": "DAFNE+: Latest advances by Hugues Vinet and Greg Beller",
        "description": "DAFNE+ offers digital content creators new ways to create, distribute, and monetize their works of art through blockchain technologies. This presentation, given during the IRCAM Forum @Paris 2025 workshops, covers the recent advances of the European project and its platform, notably new features such as the DAO (Distributed Autonomous Organisation) and asset versioning. It introduces this session's DAFNE+-related events - 3 workshops and the RAVE model competition.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<h1>DAFNE+ Platform - Share your content</h1>\r\n<p>DAFNE+ Platform is a cutting-edge NFT platform crafted to push forward the community of artists, designers, and musicians.</p>\r\n<p>DAFNE+ Platform is designed to address the evolving needs of digital content creators, providing them with innovative tools for the creation, distribution, and monetization of their artistic works through blockchain technology.&nbsp;&ldquo;One of the main purposes of the project is to make content distribution fair&rdquo;.&nbsp;</p>\r\n<p>In an intuitive and simple way, without the need for technical knowledge of blockchains/NFTs, creative communities are invited to join the decentralized autonomous organization (DAO), which offers new services and tools that allow the creation and co-creation of content on a blockchain. DAFNE+'s research also focuses on the definition of new business models through the distribution of content, allowing creators and users to monetize multimedia creations.&nbsp;<br />IRCAM&rsquo;s role in DAFNE+ is in particular to organise a community of artists and technology providers around electronic music and sound. Halfway between IRCAM's Forum and Sidney, the interactive music repertoire archive, and based on an autonomous organization and distributed infrastructure, the platform enables artists, researchers and engineers to share and monetize pieces of technology for producing music and performing works - libraries, patches, documentation...</p>\r\n<ul>\r\n<li>Website:&nbsp;<a href=\"https://dafneplus.eu/\">https://dafneplus.eu</a></li>\r\n<li>Platform:&nbsp;<a href=\"https://dafneplus.eng.it/\">https://dafneplus.eng.it</a></li>\r\n<li>Discord:&nbsp;<a href=\"https://discord.gg/aR6VvV9Ttw\">https://discord.gg/aR6VvV9Ttw</a></li>\r\n<li>Survey: <a href=\"https://forms.gle/2LcB5owCHJteZFub6\">https://forms.gle/2LcB5owCHJteZFub6</a></li>\r\n<li>YT tutorials playlist:&nbsp;<a href=\"https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ\">https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ</a></li>\r\n<li>Newsletter:&nbsp;<a href=\"https://dafneplus.eu/contact\">https://dafneplus.eu/contact</a></li>\r\n<li>Contact:&nbsp;<a href=\"mailto:info@dafneplus.eu\">info@dafneplus.eu</a></li>\r\n<li>Workshop: <a href=\"https://forum.ircam.fr/article/detail/dafne-workshop-minting-and-versionning-content-on-the-platform-with-hugues-vinet-greg-beller-and-guillaume-piccarreta/\">https://forum.ircam.fr/article/detail/dafne-workshop-minting-and-versionning-content-on-the-platform-with-hugues-vinet-greg-beller-and-guillaume-piccarreta/</a></li>\r\n</ul>",
        "topics": [
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1255,
                "name": "EU project",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1856,
                "name": "platform",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of the Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "dafne-latest-advances",
        "pk": 3324,
        "published": true,
        "publish_date": "2025-03-05T21:59:33+01:00"
    },
    {
        "title": "Sound Software in Creation and Industrialization by AKL artists & Guillaume Piccarreta (Ircam)",
        "description": "This event will explore the multifaceted applications of sound software used by sound artists and art companies from France and Korea for artistic creation and business model development.",
        "content": "<p style=\"font-weight: 400;\"><span>This session is an opportunity to share examples of sound software used by sound artists and art companies from France and Korea for artistic creation and business model development. We aim to discuss ways to utilize this software not only as a tool or business model but also as part of artistic works, exploring its multifaceted applications. Through the insights of three active professionals and one moderator in the sound field, we will share the direction in which sound software can advance in the realms of art and industry.</span></p>\r\n<p style=\"font-weight: 400;\"><strong><span>Participants:</span></strong></p>\r\n<p style=\"font-weight: 400;\"><span>(Moderator) </span></p>\r\n<p style=\"font-weight: 400;\"><strong><span>Eunhee Cho</span></strong><span>, Sound Artist and Composer</span></p>\r\n<p style=\"font-weight: 400;\"><span>(Participants)</span></p>\r\n<p style=\"font-weight: 400;\"><strong><span>Guillaume Piccarreta</span></strong><span>, Developer, IRCAM</span></p>\r\n<p style=\"font-weight: 400;\"><strong><span>Seungsoon Park</span></strong><span>, Co-CEO, NEUTUNE</span></p>\r\n<p style=\"font-weight: 400;\"><strong><span>Pyoungryang Ko</span></strong><span>, Composer</span></p>",
        "topics": [],
        "user": {
            "pk": 86096,
            "forum_user": {
                "id": 85993,
                "user": 86096,
                "first_name": "Karin",
                "last_name": "Laenen",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/65d11482a61a673c06dbdcf4cb9d156b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-03-04T16:45:07.346631+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 944,
                        "forum_user": 85993,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 656,
                                "membership": 944
                            },
                            {
                                "id": 657,
                                "membership": 944
                            },
                            {
                                "id": 846,
                                "membership": 944
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "laenen",
            "first_name": "Karin",
            "last_name": "Laenen",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 86096,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "using-sound-software-in-creation-and-industrialization",
        "pk": 3088,
        "published": false,
        "publish_date": "2024-10-30T15:16:32+01:00"
    },
    {
        "title": "XP 1.12",
        "description": "Presented during the IRCAM Forum @NYU 2022\r\n\r\nAfter a first version released in 2021, xp4l has continued on its way and, with a recent update (1.11), has taken the name XP.\r\n\r\nA corrective update has just been released. On this occasion, this article revisits the principles of the environment, discusses the evolution from xp4l to XP, and presents feedback from concrete use cases.",
        "content": "<p><strong>Flashback</strong></p>\r\n<p>xp4l was born from the will to make the IRCAM Spat library available in a flexible and dynamic format from the popular Ableton DAW, also giving it a 3D visualization interface that can help extend the possibilities of interaction with the sound field.&nbsp;<br />This first version took up several challenges, as Spat obeys instancing constraints that make it complicated, but not impossible, to integrate with Ableton's Live API.</p>\r\n<p>This first version allowed us to explore several directions, and to draw perspectives for improving its functioning.</p>\r\n<p><strong>XP</strong></p>\r\n<p>Since the release of xp4l, to better respond to these challenges and give the project more durability, xp4l has been almost entirely redesigned, both the standalone and the M4L devices. While the system keeps the same workflow, internally the way it instantiates the devices and the parameters works differently overall.</p>",
        "topics": [
            {
                "id": 207,
                "name": "Ableton",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 752,
                "name": "javascript",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 45,
                "name": "Spat5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 900,
                "name": "spatialaudio ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 901,
                "name": "xp4l",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1709,
            "forum_user": {
                "id": 1707,
                "user": 1709,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/profil4.png",
                "avatar_url": "/media/cache/49/37/4937ce84289a16db6f9d5ea374376dfb.jpg",
                "biography": "Fraction (Eric Raynaud) is a new media composer and sound artist whose work focuses in particular on the design of immersive and audiovisual experiences.\n\nHis practice developed from a background in music composition and spatial sound, which led him to assemble a complete skill set in the field of new media art. He now devotes his time to writing and producing pieces integrating digital materials of different kinds. He is particularly interested in forms of experience that involve strong interactions between generative art and sonic matter. Combining complex scenography and hybrid digital writing with visuals, sound and physical media, he aims in particular to forge links between contemporary art and the digital realm within the frame of radical experiences.\n\nFascinated by sound intensity, energy, ecstasy, and the idea of \"being able to sculpt digital disorder as a raw matter\", he finds in the lexicon of sound spatialization the appropriate field for designing atypical pieces, placing the immediate physical and emotional experience at the center of his writing.",
                "date_modified": "2025-12-29T12:55:11.027970+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fraction",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "xp-112",
        "pk": 1314,
        "published": true,
        "publish_date": "2022-09-13T12:23:53+02:00"
    },
    {
        "title": "Sonifying The Powder Toy by Kieran McAuliffe",
        "description": "I will demonstrate my musical sonification for The Powder Toy, a \"sandbox\" game in which players experiment with powdered substances in a rich physical and chemical simulation.  The addition of a sound engine enhances the emergent properties of The Powder Toy, and provides a unique interface for interacting with granular sound textures.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e10816d1e34ce3432ee8673998000311.png\" width=\"583\" height=\"533\" /></p>\r\n<p>Presented by: Kieran McAuliffe</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/mcaulibk/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p><span>The &ldquo;falling sand&rdquo; genre of games provides a unique &ldquo;sandbox&rdquo; experience to players, encouraging curiosity and creativity. Players experiment with a variety of powdered elements which are subjected to a detailed physics system and may react chemically with each other upon collision.&nbsp; These games notably lack sound, likely due to the intense computation this would require.&nbsp; This led me to develop a system for sonifying one of the most feature-rich &ldquo;falling sand&rdquo; games, The Powder Toy.&nbsp;&nbsp;</span></p>\r\n<p><span>I wanted my system to both enhance the emergent experience of playing a &ldquo;falling sand&rdquo; game and provide a unique interface for exploring granular sound.&nbsp; It uses stochastic frequency modulated granular synthesis to map a distribution of sound grains to each distribution of powdered elements, using a mixture of data-driven and manual mapping.&nbsp; To implement this sonification, I used my own luagran~ MaxMSP external, which receives distribution data from a forked build of The Powder Toy.&nbsp; I created additional sonification algorithms for the vegetation and electronics systems in The Powder Toy using simpler techniques.</span></p>\r\n<p></p>",
        "topics": [
            {
                "id": 406,
                "name": "Game",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1808,
                "name": "granular synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1820,
                "name": "interactive live electronics",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 703,
                "name": "Sonification",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 36385,
            "forum_user": {
                "id": 36336,
                "user": 36385,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/B_Kieran_McAuliffe_-_Head_shot.jpg",
                "avatar_url": "/media/cache/b7/29/b729e5059ddb494f740ab4aa4116bbd6.jpg",
                "biography": "Researcher and artist Kieran McAuliffe investigates human interaction with media in a variety of settings.  He currently works at the Hamburg University of Applied Sciences researching auditory illusions in virtual reality, and as a staff programmer at the Ligeti Center.  Formerly, he received a DMA from the University of Cincinnati College Conservatory of Music, where he researched and developed probability-based software for use by digital artists.  In his free time Kieran performs as a jazz guitarist and develops a fighting game.",
                "date_modified": "2026-02-18T09:41:50.423324+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1059,
                        "forum_user": 36336,
                        "date_start": "2025-01-20",
                        "date_end": "2026-01-21",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "mcaulibk",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sonifying-the-powder-toy",
        "pk": 3186,
        "published": true,
        "publish_date": "2024-12-28T08:47:35+01:00"
    },
    {
        "title": "nn/mémoire: Embodied Latent Space Walk by Jiatong Liu",
        "description": "Exploring expressive manipulation of machine-learning models as a sound-design tool for storytelling. Drawing on cultural soundscapes, nn/mémoire provides an alternative lens on artificial intelligence, memory and heritage.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b1c93dc64a92ca9f447d8f570ebc572e.png\" width=\"793\" height=\"712\" /></p>\r\n<p>Presented by: Jiatong Liu</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/jtisafish/\" title=\"Biography\">Biography</a></p>\r\n<p><strong>Rationale</strong></p>\r\n<p>Sound is a time-based medium - the act of recording captures moments past. Site-specific field recordings encapsulate a moment of time in that particular space and the story behind it.</p>\r\n<p>nn/memoire grasps the temp-aurality of the Beijing Hutong by recording its cultural soundscapes - sounds of community activities popular from the 1980s to today that are organically shifting and slipping away.</p>\r\n<p>Modifying such recordings deforms and alters our perception of memory and past time. Some memories are lost, while others float to the surface through the uncanny experience of remembrance.</p>\r\n<p>The project uses Machine Learning as a form of sound design to alter our perception of Hutong soundscapes and memory, curating an artificial practice of remembrance.</p>\r\n<p><strong>Artistic Approach</strong></p>\r\n<p>By collecting a small dataset of Hutong soundscapes, I mapped the sounds into an immersive environment. The player&rsquo;s movement (trajectory, velocity, direction) generates the audio in the space forwards, backwards and sideways, creating an eerie virtual soundwalk of Hutong sounds.</p>\r\n<p><strong>Machine Learning as Sound Design</strong></p>\r\n<p>The project inquires into the potential of Machine Learning to produce novel musical affordances as a form of sound design in storytelling. Here, by altering the inference of a VAE model, I reproduced audio textures reminiscent of snippets of memory.</p>\r\n<p>Moreover, the spatial mapping of audio - made possible with the chosen Machine Learning technology - explores embodiment in Human-Computer Interaction.</p>\r\n<p><strong>Technical Implementation</strong></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d2b4a7b161896d70dd22f8548cb70398.png\" /></p>\r\n<ul>\r\n<li><strong>Making the audio terrain</strong>: applying Expressive Manipulation to a VAE model using Latent Terrain Synthesis, creating a 2D audio terrain</li>\r\n<li><strong>Polishing the audio terrain</strong>: development of helper tools and techniques to eliminate unwanted audio artefacts in the audio terrain, at both the latent-vector level and the audio-engine level</li>\r\n<li><strong>Matching the audio terrain</strong>: creating an immersive environment in Unreal Engine with local 3D scans, with 2D coordinates matching the audio terrain</li>\r\n<li><strong>Connecting the audio terrain</strong>: connecting Max and Unreal Engine with OSC</li>\r\n</ul>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 910,
                "name": "field recordings",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2487,
                "name": "heritage",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4071,
                "name": "ircam forum workshops 2026",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1745,
                "name": "nn~",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4072,
                "name": "sound walk",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 118906,
            "forum_user": {
                "id": 118750,
                "user": 118906,
                "first_name": "Jiatong",
                "last_name": "Liu",
                "avatar": "https://forum.ircam.fr/media/avatars/Screenshot_2025-12-28_at_22.43.37.png",
                "avatar_url": "/media/cache/be/54/be548ae26e0facbecccee2ec2500122b.jpg",
                "biography": "Jiatong Liu is a London-based artist-researcher whose work explores musical affordances, embodied interaction, and immersive computational systems. They hold an MSc in Creative Computing from the Creative Computing Institute, University of the Arts London, with a thesis on real-time embodied human–AI interaction.\n\nThey work as a creative technologist and composer across computational performance, games, and film production. Recent credits include composing for the award-winning indie game 'I Write Games Not Tragedies' and developing a computational arts performance presented at Camden People’s Theatre (2024). They currently work as a creative technologist, liaising between VFX, software development, and audiovisual generation within large-scale film production pipelines.",
                "date_modified": "2026-01-24T15:36:11.021609+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jtisafish",
            "first_name": "Jiatong",
            "last_name": "Liu",
            "bookmarks": []
        },
        "slug": "nnmemoire-embodied-latent-space-walk-by-jiatong-liu",
        "pk": 4259,
        "published": true,
        "publish_date": "2026-01-27T17:00:00+01:00"
    },
    {
        "title": "DAFNE+ workshop - Hands on Reality Check with Miller Puckette",
        "description": "Reality Check is a framework for protecting an ongoing music production using a continuous integration (CI) paradigm. The benefits are at least two-fold: the pieces that are included in the CI system can be monitored for their continued viability; and also, the various software components used in their realization can use the pieces as unit tests to ensure their own continued back-compatiblity.\r\n\r\nDuring this workshop, Miller Puckette will present the Reality Check environment and its latest developments. Practical illustrations will help you get to grips with the software. Bring your Pd patch and see for yourself the benefits of this innovative method for graphic programming of music",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<p><em>&nbsp;</em></p>\r\n<p>Link to the survey:&nbsp;<a href=\"https://forms.gle/xdiXNhxJrMZJQrPq9\">https://forms.gle/xdiXNhxJrMZJQrPq9</a></p>\r\n<p>&nbsp;</p>\r\n<p><strong>Acknowledgments:</strong></p>\r\n<p>This work is supported by IRCAM and by the<span>&nbsp;</span><a href=\"https://dafneplus.eu/\">DAFNE+ project</a><span>&nbsp;</span>under Horizon Europe Grant Agreement number 101061548.</p>\r\n<p><img src=\"https://forum.ircam.fr/media/uploads/user/71659fecb1f12b1b9516b55cf97d4919.png\" alt=\"\" width=\"1250\" height=\"703\" /></p>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: 48px; top: 81.7726px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>",
        "topics": [
            {
                "id": 2704,
                "name": "CI",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 328,
                "name": "Pd",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "dafne-workshops-hands-on-reality-check",
        "pk": 3318,
        "published": true,
        "publish_date": "2025-03-05T11:52:35+01:00"
    },
    {
        "title": "ASMR Dream Scenario Roleplay #BrainScratches",
        "description": "Where is the point at which seduction turns to repulsion, internal to external, order to chaos? \n",
        "content": "<p>This live performance was made on Ableton using a granular synthesiser plugin coded by the artist.&nbsp;<br>@faebia_______</p>",
        "topics": [
            {
                "id": 207,
                "name": "Ableton",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1915,
                "name": "Asmr",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 79,
                "name": "Max8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 55154,
            "forum_user": {
                "id": 55091,
                "user": 55154,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/935cfa0bfe444fcaa4410409b216c1e9?s=120&d=retro",
                "biography": "Fabia Martin is an interdisciplinary conceptual artist working between sound, installation and performance art. Currently she is researching and creating work around the fundamental question of how to exist as a human/body in the post-digital age. How, as a chaotic system of unpredictable actions, emotional outbursts and leaking holes, can we find points of connection with the sleek interfaces and user-friendly experiences presented by technology?\n\nRecently, she has been performing an offbeat ASMR piece titled \"ASMR Wet Mouth Sounds, Slime #BrainScratches\". The performance comments on the innate feelings of desire towards that which sits on the border of attraction and repulsion, and the internet's role in facilitating these experiences without shame. The chaos that ensues as the performance ultimately breaks down prompts a questioning of the security of these boundaries.",
                "date_modified": "2024-04-03T19:22:19.249912+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fabiamartin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "asmr-dream-scenario-roleplay-brainscratches",
        "pk": 2842,
        "published": true,
        "publish_date": "2024-03-19T18:05:40.009549+01:00"
    },
    {
        "title": "Projet test",
        "description": "Projets test",
        "content": "<p>projet test</p>",
        "topics": [],
        "user": {
            "pk": 17665,
            "forum_user": {
                "id": 17661,
                "user": 17665,
                "first_name": "Liz",
                "last_name": "Gorsen",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/48e15267e5be2206fe65e79e8e39f870?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lizgorsen",
            "first_name": "Liz",
            "last_name": "Gorsen",
            "bookmarks": []
        },
        "slug": "projet-test",
        "pk": 959,
        "published": false,
        "publish_date": "2021-06-08T10:24:21.132565+02:00"
    },
    {
        "title": "The wind wuthering-  Shuning DIAO",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>This work is constructed with a series of customized 埙(Xun), which is a traditional Chinese instrument. These Xun will look like beautiful, colorful vases with holes, and when the wind goes through these holes, a sharp whistle will sound. Poems are also complemented with whistle sounds that tell the stories of women as read by feminist poets. The vases are made to depict the women's body figures, as they may sing along with their own stories.</p>",
        "topics": [],
        "user": {
            "pk": 27368,
            "forum_user": {
                "id": 27340,
                "user": 27368,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/41bf2f9d7cc6995b01cb1c5447ca908c?s=120&d=retro",
                "biography": "Shuning Diao is an installation artist and a researcher. After finishing her bachelor's degree in Philosophy, Politics and Economics from Renmin University of China, she is currently studying Information Experience Design at the Royal College of Art. Her works mainly focus on feminism, labour's rights, and body politics. She creates interactive or traditional art installations to build interactions between her work, the audience, social issues, and the artist herself.",
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "shuningdiao",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-wind-wuthering",
        "pk": 2084,
        "published": true,
        "publish_date": "2023-02-24T17:26:52+01:00"
    },
    {
        "title": "Linksys Velop Configuration: Step By Step Guide ",
        "description": "Linksys velop mesh system is used by people with larger home or multi story space. Velop mesh devices are dual band or tri band frequency. Also has the latest features like guest network and parental control. ",
        "content": "<p><span style=\"\">Linksys velop mesh system is used by people with larger home or multi story space. Velop mesh devices are dual band or tri band frequency. Also has the latest features like guest network and parental control. Latest technology is used to create linksys velop mesh devices. Linksys Velop Configuration at home is very beneficial. You can set up the mesh system in your space by yourself. With adequate information you can set up the device very easily using the linksys app.&nbsp;</span></p>\n<h1><strong>Linksys Velop Mesh Setup</strong></h1>\n<p><span style=\"\">You have two ways to complete the configuration. The first method is via app and the second one is via web. <a href=\"https://linksys-wifi.com/linksys-velop-setup/\">Linksys Velop Configuration </a>using linksys app is a very convenient way to upgrade your home network. Check out the step by step guide to setup the linksys velop mesh:</span></p>\n<h2><strong>Download The App</strong></h2>\n<p><span style=\"\">Firstly, download the linksys app on your mobile device from google play store or app store as per the device operating system you have. Afterwards search for linksys app and download it in your system.&nbsp;&nbsp;</span></p>\n<h2><strong>Position The Node</strong></h2>\n<p><span style=\"\">Decide the position of velop nodes you have. While positioning be careful to keep it within the range and away from distractions.&nbsp;</span></p>\n<h2><strong>Connect To Host Device</strong></h2>\n<p><span style=\"\">Put the primary node closer to the host networking device and Inject the ethernet cable into the primary node. Next insert the same cable into the host network to connect them.&nbsp;</span></p>\n<h2><strong>Power up</strong></h2>\n<p><span style=\"\">Turn on the power of the velop and let it boot up properly. Booting may take a while. 
Wait for the boot up to end before moving ahead.</span></p>\n<h2><strong>Connect&nbsp;</strong></h2>\n<p><span style=\"\">Connect your mobile phone with the velop node network. Access the wireless settings of your mobile device to establish connection.&nbsp;</span></p>\n<h2><strong>Launch App</strong></h2>\n<p><span style=\"\">Launch the linksys app you recently downloaded and then login to the app with default information. Default details are specified on the user manual you get along with the device.</span></p>\n<h2><strong>Configure&nbsp;</strong></h2>\n<p><span style=\"\">You&nbsp; have to initiate the configuration by selecting language, date &amp; timing. Further the app will guide you to configure the velop node and firstly make changes in your Wi-Fi name and password for the velop network. Set up guest access if you want to for the visitors.&nbsp;</span></p>\n<h2><strong>Additional Node Setup</strong></h2>\n<p><span style=\"\">Afterwards add the additional nodes into the network. Position the additional nodes strategically and then connect devices to Wi-Fi with WPS. As all additional nodes are added to the network, move further and block the internet with parental controls.</span></p>\n<h2><strong>Finalize</strong></h2>\n<p><span style=\"\">Finalize the setup by selecting the submit button. In the end you have to test your internet connection speed. Hence your linksys velop mesh Setup is ready to use and you can enjoy a very high speed network everywhere at your home.&nbsp;</span></p>\n<h1><strong>Web Based Linksys Velop Mesh Setup</strong></h1>\n<p><span style=\"\">Setting up linksys velop mesh network via web is super easy if you follow the guide given here. 
These instructions will help you complete the linksys velop configuration without any trouble.&nbsp;</span></p>\n<ul>\n<li style=\"\"><span style=\"\">Initiate the process with setting up the primary node closer to the host network device.&nbsp;</span></li>\n<li style=\"\"><span style=\"\">After that connect both the devices using the ethernet cable and then power up the primary node.&nbsp;</span></li>\n<li style=\"\"><span style=\"\">Select a client device (computer/ laptop) to move further and connect it with the node network.</span></li>\n<li style=\"\"><span style=\"\">Launch a web browser and specify IP address to open the linksys velop setup page access.&nbsp;</span></li>\n<li style=\"\"><span style=\"\">Specify the information to access the admin dashboard.&nbsp;</span></li>\n<li style=\"\"><span style=\"\">Follow the on screen instructions and configure internet settings.&nbsp;</span></li>\n<li style=\"\"><span style=\"\">Afterward make changes in the node network name and password.&nbsp;</span></li>\n<li style=\"\"><span style=\"\">Then you have to add the additional nodes to the network one by one.&nbsp;</span></li>\n<li style=\"\"><span style=\"\">Enable security settings and click save button to save changes.&nbsp;</span></li>\n<li style=\"\"><span style=\"\">Prior to submitting the changes, don't forget to check the firmware of velop nodes and update if needed.</span></li>\n</ul>\n<h1><strong>Conclusion</strong></h1>\n<p><span style=\"\">Linksys Velop Configuration is a very important step. You have to follow the instructions specified above to complete the setup by yourself. The above given guide will help you in installing the device hardware, configure its settings and boost network performance. In case you have any kind of doubt then you can communicate with technical experts for help. 
An expert will assist you with the configuration process and also in resolving the problem you are facing by applying significant troubleshooting tips.</span></p>\n<p><br><br></p>",
        "topics": [],
        "user": {
            "pk": 166337,
            "forum_user": {
                "id": 166101,
                "user": 166337,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6cd3e0429a12411375942858c0c064f4?s=120&d=retro",
                "biography": "Linksys Velop configuration refers to the process of setting up and managing a Linksys Velop mesh WiFi system to ensure seamless internet connectivity across your home or office. It involves connecting Velop nodes to a modem, configuring network settings, and optimizing coverage using the Linksys app or web interface. This configuration helps eliminate dead zones, improve speed, and create a stable, secure wireless network.",
                "date_modified": "2026-04-01T12:24:42.835700+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jenniferdavis0676",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "linksys-velop-configuration-step-by-step-guide",
        "pk": 4571,
        "published": false,
        "publish_date": "2026-04-01T12:28:10.418092+02:00"
    },
    {
        "title": "DAFNE+ : Blockchain, NFT and DAO for electronic music",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p><strong>\"DAFNE+ provides digital content creators new forms of creation, distribution and monetization of their works of art through blockchain technology. \"</strong><br class=\"\" /><br class=\"\" /><span>A new international research and innovation project supported by the European Union (Horizon program), the DAFNE+ platform for fair creative content distribution will empower creators and communities though new digital distribution models based on digital&nbsp;tokens. In an intuitive and simple way, without the need for technical knowledge in blockchains/NFTs, creative communities are invited to join the decentralized autonomous organization (DAO) offering new services and tools that allow the creation and co-creation of content in&nbsp;a blockchain. DAFNE+'s research will also focus on the definition of new business models through the distribution of content, allowing creators and users to monetize multimedia creations.&nbsp;</span><br class=\"\" /><br class=\"\" /><span>IRCAM&rsquo;s role in DAFNE+ is in particular to organise a community of artists and technology providers on electronic music and sound. Halfway between IRCAM's Forum software and archives of interactive music/sound repertoire, and based on an autonomous organisation&nbsp;and distributed infrastructure, the platform will enable artists, researchers and engineers to share and monetize pieces of technology for producing music and performing works - libraries, patches, documentations...</span><br class=\"\" /><br class=\"\" /><span>DAFNE+ just started with a phase of gathering user expectations and requirements that will form the platform specification. The purpose of this workshop is to present the project&rsquo;s main objectives and to organise an exchange with the participants on their&nbsp;interest and expectations that will contribute to the platform design.</span></p>",
        "topics": [
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1255,
                "name": "EU project",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18210,
            "forum_user": {
                "id": 18203,
                "user": 18210,
                "first_name": "Hugues",
                "last_name": "Vinet",
                "avatar": "https://forum.ircam.fr/media/avatars/Hugues_Vinet_Portrait2017_large_low.jpg",
                "avatar_url": "/media/cache/4c/92/4c92397e1e69913141f89327eccc6007.jpg",
                "biography": "Hugues Vinet is Director of Innovation and Research Means of IRCAM. He has managed all research, development and innovation activities at IRCAM since 1994. He co-founded and ran for several terms the STMS (Science and Technology of Music and Sound) joint lab with French Ministry of Culture, CNRS and Sorbonne Université. He previously worked at the Groupe de Recherches Musicales of National Institute of Audiovisual in Paris where he managed the research and designed the first versions of the award winning real-time audio processing GRM Tools product. He has coordinated many collaborative R&D projects including recently H2020 VERTIGO in charge of the STARTS Residencies program managing 45 residencies of artists with technological research projects throughout Europe. He is currenty IRCAM's PI for EU MediaFutures project (artistic residencies for innovation in media) and DAFNE+ project dedicated to creatives' communities based on blockchain/NFT/DAO. He also curates the Vertigo Forum art-science yearly symposium at Centre Pompidou. He participates in various bodies of experts in the fields of audio, music, multimedia, information technology and innovation.",
                "date_modified": "2026-02-26T18:55:39.688865+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 417,
                        "forum_user": 18203,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-21",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "vinet",
            "first_name": "Hugues",
            "last_name": "Vinet",
            "bookmarks": []
        },
        "slug": "dafne",
        "pk": 1320,
        "published": true,
        "publish_date": "2022-09-09T14:59:26+02:00"
    },
    {
        "title": "ISMM News by Frederic Bevilacqua, Diemo Schwarz, Riccardo Borghesi, Benjamin Matuszewski, Jérôme Nika",
        "description": "",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -100px; top: -20.0052px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>\r\n<p><span></span></p>\r\n<p><span><img src=\"https://forum.ircam.fr/media/uploads/como.te.jpg\" alt=\"\" width=\"672\" height=\"1122\" /></span></p>\r\n<p><span></span>Presented by&nbsp;Frederic Bevilacqua, Diemo Schwarz, Riccardo Borghesi, Benjamin Matuszewski, J&eacute;r&ocirc;me Nika</p>\r\n<p><span>We will present new features of the MuBu for Max framework for multimodal analysis of sound and motion, interactive sound synthesis and machine learning, the CataRT and SKataRT corpus-based synthesis tools for Max and Ableton Live, the Gestural Sound Toolkit for the prototyping of gesture&ndash;sound interaction scenarios, and the new version of the Soundworks framework for JavaScript with tutorials.</span><br /><span>We will also show a new series of Max for Live plugin Koral tailored for using movement sensors of smartphone through our application comote, developed in collaboration with the association Arts Convergence, as well as give insights about the current research and developments dealing with the composition of interaction with music synthesis processes.</span></p>",
        "topics": [],
        "user": {
            "pk": 21,
            "forum_user": {
                "id": 21,
                "user": 21,
                "first_name": "Frederic",
                "last_name": "Bevilacqua",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a5c31b02a13ce493dbe36917564770e5?s=120&d=retro",
                "biography": "Frédéric Bevilacqua is the head of the Sound Music Movement Interaction team at IRCAM in Paris, in the joint research lab Science & Technology for Music and Sound – IRCAM – CNRS – Sorbonne Université. His research concerns the interaction between movement and sound and the development of gesture-based interactive systems, with applications in performing arts, education and health.\n\nHe holds a MS in physics and a Ph.D. in Biomedical Optics from EPFL. He  studied music at the Berklee College of Music in Boston. From 1999 to 2003 he was a researcher at the Beckman Laser Institute at the University of California Irvine. In 2003 he joined IRCAM as a researcher on gesture analysis for music and performing arts.\n\nHe co-authored more than 150 scientific publications and co-authored 5 patents. He was keynote or invited speaker at several international conferences such as the ACM TEI’13. He was awarded in 2011 the 1st Prize of the Guthman Musical 1st Prize of the Guthman Musical Instrument Competition (Georgia Tech) and received the award “prix ANR du Numérique” from the French National Research Agency (2013).",
                "date_modified": "2026-01-25T21:51:30.597035+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 12,
                        "forum_user": 21,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-17",
                        "type": 0,
                        "keys": [
                            {
                                "id": 270,
                                "membership": 12
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "bevilacq",
            "first_name": "Frederic",
            "last_name": "Bevilacqua",
            "bookmarks": []
        },
        "slug": "ismm-news-by-frederic-bevilacqua-diemo-schwarz-riccardo-borghesi-benjamin-matuszewski-jerome-nika",
        "pk": 3366,
        "published": true,
        "publish_date": "2025-03-20T11:21:18+01:00"
    },
    {
        "title": "Reciter(s) by Po-Hao Chi (C-Lab, Taiwan)",
        "description": "A distributed sound performance using mobile devices, voice assistants, and algorithmically recomposed texts generated with connectivity. Audiences' devices can join via a webpage to become part of a collective recitation shaped by diverse synthetic accents, network latency, and device heterogeneity.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/6224d3780ab6994c09e19d2ed593628e.jpeg\" /></p>\r\n<p><em><strong>Reciter(s)</strong></em> is a distributed sound performance composed of mobile devices, voice assistants, and algorithmically recomposed texts. Audiences simply open a webpage on their phones to join a collective recitation shaped by diverse synthetic accents and rhythms, transforming the internet's connectivity, synchronization capabilities, and transmission errors into an audible experience.</p>\r\n<p>The work draws inspiration from Cisco's widely recognized 2000 commercial, \"<em>Empowering the Internet Generation</em>,\" in which children of various ethnicities repeatedly asked, \"Are you ready?\", conveying an optimistic vision of global connectivity and digital empowerment. Today, voice assistants appear to fulfill that borderless promise, but the synthetic voices that once felt novel have become habitual media: embedded, standardized, and teetering on the edge of obsolescence.</p>\r\n<p>The system's architecture routes text through a Max/MSP patch to a cloud server, which distributes fragments via WebSocket to connected browsers. Each device reads the assigned content aloud using its built-in speech engine. As more devices join, differences in hardware, network quality, and vocal character introduce unpredictable shifts in rhythm and alignment. 
The system foregrounds rather than corrects these deviations: glitches, offsets, and device heterogeneity become performative material.</p>\r\n<p><em><strong>Reciter(s)</strong></em> brings texts, data, and behavioral traces from cyberspace into physical space through distributed recitation, revealing gaps between technological rationality and sensory experience.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0aafb59cceef6214a56a52dbdf060364.jpeg\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/cceec85033e8bb17ea262c558ba76075.png\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/261bce0b4fe658526ae4e03de4ffe958.jpeg\" /></p>",
        "topics": [
            {
                "id": 4310,
                "name": "browser-based sound",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4314,
                "name": "generative system",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4313,
                "name": "network performance",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4311,
                "name": "participatory art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4312,
                "name": "text-to-speech",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 2271,
            "forum_user": {
                "id": 2269,
                "user": 2271,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/efbc2b2e70d5149eae1b63d9ce64b95f?s=120&d=retro",
                "biography": "Po-Hao CHI is an interdisciplinary practitioner from Taiwan who works at the intersection of art, music, and technology. His practice often arises from a fascination with boundaries and guidelines, connecting diversity in daily life — from conceptual to virtual art, software to hardware, and performance to installation. His recent research focuses on agency and the collaborative capacities between humans and artefacts with evolving connectivity. Chi graduated from the MIT Art, Culture, and Technology programme, earned his MMus from Goldsmiths College, and obtained a B.A. in Economics from National Taiwan University.\n\nCHI's works frequently employ sonification approaches to design interactive systems, exploring \"more than human\" issues through technological artefacts. His international residencies include V2 (Netherlands), Laboral (Spain), FACT (U.K.), and Medialab Prado (Spain). He was also awarded the Harold and Arlene Schnitzer Prize in Visual Arts at MIT. Since 2016, he has also participated in theatre productions as a sound designer and composer, with commissions from Macau Art Center, National Kaohsiung Center for the Arts, Taipei Chinese Orchestra, Ju Percussion Group, and o",
                "date_modified": "2026-02-23T17:09:41.521400+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "stu84096",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "reciters-by-po-hao-chi-taiwan-1",
        "pk": 4419,
        "published": true,
        "publish_date": "2026-02-23T17:21:59+01:00"
    },
    {
        "title": "Artaud & AI by Lionel Hubert",
        "description": "“Artaud and AI” stages a radical encounter between the wild, visceral voice of Antonin Artaud and the rational structures of artificial intelligence. Drawing on his 1947 sound recordings, the project uses machine learning to explore the limits of computational logic when exposed to human madness. Each performance is a living experiment, where text, sound, and improvisation collide in real time between man and machine.",
        "content": "<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/149e7f4b8b98c47c4100e5ef950ac88b.jpg\" /></p>\r\n<p>The concept of confrontation between madness and the rationality of computational models is at the heart of my project <em>&ldquo;Artaud and AI&rdquo;</em>. I was fascinated by the voice of Antonin Artaud, which I would describe as &ldquo;peculiar, shifting, disturbing, beautiful and hideous at the same time,&rdquo; as well as by the &ldquo;organic, modern, and unreal&rdquo; sounds of his original 1947 recording. This sound, reminiscent of noise music and the works of Japanese artists like Kenji Siratori, and including Artaud&rsquo;s unique and musical glossolalia, immediately captivated me.</p>\r\n<p>Here is how this confrontation is envisioned and implemented in the project:</p>\r\n<ul>\r\n<li>\r\n<p><strong>The Core of the Artistic Approach</strong></p>\r\n<p>◦ The project explicitly aims to confront Artaud&rsquo;s voice &mdash; and by extension, his madness, rage, and violence &mdash; with the algorithmic rationality of computational models. The ambition is to create a <em>new sonic language</em> using Artaud&rsquo;s poetry and recordings as foundation and raw material. It seeks to explore new ways of combining music and poetry, using the computer as a tool to create a unique interaction between text, sound, music, and actor performance in the full version.</p>\r\n</li>\r\n<li>\r\n<p><strong>Artificial Intelligence as a Mirror of Human Creativity</strong></p>\r\n<p>◦ The project explores the boundaries between artificial intelligence and human creativity by <em>diverting the principles of Machine Learning</em> in an innovative and provocative way. 
Its goal is to establish a dialogue between the <em>algorithmic rationality</em> of the machine and the <em>unpredictability of human artistic expression</em>. Artaud&rsquo;s work, through its modernity, visionary nature, and disturbing quality, is considered perfectly aligned with this idea.</p>\r\n</li>\r\n<li>\r\n<p><strong>Methodological Approaches Based on Machine Learning</strong></p>\r\n<p>◦ The three pillars of Machine Learning are used as lenses to rethink artistic creation in this project.</p>\r\n<p>◦ <em>Supervised learning</em> will serve to reinterpret artistic styles.</p>\r\n<p>◦ <em>Unsupervised learning</em> will be the preferred tool for exploring the unknown, by feeding the models <em>unconventional raw data</em>, particularly the sounds from Artaud&rsquo;s 1947 recording. The goal is to reveal hidden patterns and transform them into abstract compositions. This echoes Artaud&rsquo;s idea of the &ldquo;theatre of cruelty&rdquo;, where dialogue and words lose their meaning to retain only the sensory.</p>\r\n<p>◦ <em>Reinforcement learning</em> will make it possible to create autonomous musical agents capable of interacting in real time with human performers. These virtual entities will learn to improvise, react, and evolve, thus blurring the boundaries between creator and creation.</p>\r\n</li>\r\n<li><strong>The Patch</strong></li>\r\n</ul>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/345a9517b4e416da802cad455818cdab.png\" />&nbsp;</p>\r\n<ul>\r\n<li style=\"list-style-type: none;\">\r\n<ul>\r\n<li>\r\n<p>Currently, I am working on a specific patch for the performance in Liepāja. The patch presented here, from which it will be derived and modified for the presentation, currently consists of four modules:</p>\r\n<ul>\r\n<li>The first module utilizes RAVE and the nn~ object. Two models are employed:</li>\r\n</ul>\r\n<ul>\r\n<li>The first model is a rapid and concise &laquo;&nbsp;training&nbsp;&raquo; of Artaud&rsquo;s own voice. 
Regrettably, I lacked the time to complete the training, but the resulting sound and color were entirely satisfactory.</li>\r\n</ul>\r\n<ul>\r\n<li style=\"list-style-type: none;\">\r\n<ul>\r\n<li>The second model is the RAVE2 model, which I believe was trained with orchestral sounds.</li>\r\n</ul>\r\n</li>\r\n<li>An XY slider enables real-time influence on the generation.</li>\r\n<li>The output of this second model is routed to Somax2, which has been trained with all the more or less sung parts by Artaud in the original 1947 recording. I intend to incorporate a second agent in the subsequent version.</li>\r\n<li>The second module is dedicated to Dicy2, which generates piano and voice sequences utilizing the text and voice of Antonin Artaud, as well as real-time instrumental performance.</li>\r\n<li>Modules 3 and 4 employ concatenative synthesis objects from Rodrigo Constanzo&rsquo;s &laquo;&nbsp;Data Knot&nbsp;&raquo; package. However, I am considering replacing module 3 with another Somax agent utilizing Prosax by Mikhail Malt and the REACH team.</li>\r\n</ul>\r\n</li>\r\n</ul>\r\n</li>\r\n</ul>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/537426d65e13b9c1e4392fe99be08d9a.png\" /></p>\r\n<ul>\r\n<li>\r\n<p><strong>Fundamental Questions Raised by the Confrontation</strong></p>\r\n<p>◦ The project seeks to answer two essential questions:</p>\r\n</li>\r\n</ul>\r\n<p><em>- What understanding can artificial intelligence have of human madness?</em> By confronting algorithms with &ldquo;irrational&rdquo; or chaotic data, the project aims to shed light on the limits of computer logic when faced with the complexity of the human mind.</p>\r\n<p><em>- How does the rationality of the machine oppose the irrationality of human behavior and thought?</em> By designing situations in which AI must interpret or respond to unpredictable human behaviors, it becomes possible to explore the contrasts and potential synergies between these two forms of 
intelligence.</p>\r\n<ul>\r\n<li>\r\n<p><strong>Performance as an Embodiment of the Confrontation</strong></p>\r\n<p>◦ On stage, the initial scenography proposes a <em>minimalist and decrepit Artaud&rsquo;s room</em> for the narrator, contrasting with a <em>sterile and white environment</em> for the musician and the computer, representing the &ldquo;MACHINE or the subconscious of Antonin Artaud&rdquo;.</p>\r\n<p>For a solo performance, I take on the role of the musician interacting directly with the &ldquo;Machine&rdquo;. The process is dynamic and non-preprogrammed, except for the text of <em>&ldquo;To Have Done with the Judgment of God&rdquo;</em>, and various sound corpora that will serve as fixed data. Each performance will be unique, with a changing &ldquo;soundtrack&rdquo; resulting from this real-time interaction. The sonic universe of the show will function like a trio between the spoken text, the musician&rsquo;s interventions, and the machine&rsquo;s &ldquo;proposals&rdquo;. This ongoing dialogue between human play and AI-generated responses illustrates the confrontation between Artaud&rsquo;s unpredictably mad spirit and the rationality of the algorithm.</p>\r\n</li>\r\n<li>\r\n<p><strong>Objectives and Stakes</strong></p>\r\n<p>The artistic approach aims to be both critical and constructive, seeking to <em>demystify AI</em> by exposing its limitations and potential in a tangible and accessible way. The goal is to encourage reflection on our relationship to emerging technologies and their place in the creative process, and to open up new paths of artistic expression at the intersection of human and machine.</p>\r\n</li>\r\n</ul>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1036,
                "name": "DICY2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4555,
            "forum_user": {
                "id": 4552,
                "user": 4555,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/portrait_-_1.png",
                "avatar_url": "/media/cache/e5/64/e5649baacfc2d34d98d33a11c4fa5768.jpg",
                "biography": "Hi! My name is Lionel Hubert from Paris, Also known as Kalikay, I am a French musician, composer, and sound artist exploring diverse musical styles for over 35 years. From rock guitar to improvised music, electronic glitches, and noise, I embrace creative freedom.\n\nFor 23 years, I’ve collaborated closely with Charlemagne Palestine, contributing to music, sound design, technology, and visual projects. My sound installations connect music, space, and technology, with works showcased at Musée des Arts Décoratifs de Paris (Mon ours en peluche) and an upcoming project in Maastricht focusing on Grammeer, a 1550 bell.\n\nCurrently engaged in machine learning and AI-driven sound design, I collaborate through the IRCAM Forum. I’m also leading \"ARTAUD et l’IA,\" supported by Art Zoyd Studio, SACEM, and Phonurgia Nova, confronting AI with Antonin Artaud’s poetic intensity.",
                "date_modified": "2026-03-02T08:47:43.893438+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 123,
                        "forum_user": 4552,
                        "date_start": "2023-02-16",
                        "date_end": "2025-11-27",
                        "type": 0,
                        "keys": [
                            {
                                "id": 652,
                                "membership": 123
                            },
                            {
                                "id": 772,
                                "membership": 123
                            },
                            {
                                "id": 908,
                                "membership": 123
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "kalikay",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 93,
                    "user": 4555,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "artaud-ai",
        "pk": 3588,
        "published": true,
        "publish_date": "2025-07-30T14:40:14+02:00"
    },
    {
        "title": "Real-time simulations of nonlinear vibrations by Thomas Risse",
        "description": "This work presents a series of MAX/MSP patches based on simulations of a vibrating string. Nonlinear vibrations are taken into account, resulting in an amplitude dependent behaviour. Interaction with a bow is also shown.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<h2>Introduction</h2>\r\n<p>Physical modelling for sound synthesis has a long history. Classical methods for representing musical resonators include digital waveguides or modal synthesis. Musical instruments are often represented by the coupling of a linear resonator with a local nonlinear exciter, such as a bowing mechanism, vibrating lips or reeds. Software such as Modalys, developed at IRCAM, are tailored for representing such assemblies. However, they are in general not suitable for the simulation of instruments where the resonator itself includes a nonlinearity.</p>\r\n<p>With recently developed numerical methods, it has become possible to handle nonlinear resonator effects in real-time (meaning that the algorithms are fast enough and robust enough). As an evidence of this statement, we developed a MAX/MSP external dedicated to the simulation of a nonlinear vibrating string, that we present in this article.</p>\r\n<p>&nbsp;</p>\r\n<h2>Interface</h2>\r\n<p><img alt=\"Max/MSP interface\" src=\"https://forum.ircam.fr/media/uploads/user/d3c1ccc749a8c17d9ad7d96a82d7e59b.png\" /></p>\r\n<p>The interface is built around the object 1dSAV.CubicString (hidden in the presentation mode). The string is excited by a force, defined in the \"Excitation type\" section and applied pointwise at the Excitation position. The outputs signals correponds to the velocity of the string at two different points along the string. The pysical parameters of the string are defined through a set of higher level perceptive parameters (fundamental frequency, inharmonicity and decay times). 
The regularisation parameter and the stability condition setting are algorithm parameters that can be left as is in most cases.&nbsp;</p>\r\n<p>Depending on the inharmonicity and decay time settings, sounds range from string-like to bell-like.</p>\r\n<p>&nbsp;</p>\r\n<h2>Polyphony for keyboard-like use</h2>\r\n<h2><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d95d32b62c34ad4aee90b3499611d3b0.png\" /></h2>\r\n<p>A polyphonic patch allows playing an ensemble of strings in a keyboard-like configuration.</p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 214,
                "name": "Physical Modeling Engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 59,
                "name": "Synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 11075,
            "forum_user": {
                "id": 11072,
                "user": 11075,
                "first_name": "Thomas",
                "last_name": "Risse",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d06c51cefa8a169a7683c6450d0ab981?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-17T15:12:36.308173+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Risse",
            "first_name": "Thomas",
            "last_name": "Risse",
            "bookmarks": []
        },
        "slug": "real-time-simulations-of-nonlinear-vibrations-by-thomas-risse",
        "pk": 4318,
        "published": true,
        "publish_date": "2026-02-05T11:55:02+01:00"
    },
    {
        "title": "how to download Panoramix for free?",
        "description": "I have created an account, but the free download link keep link \"Share with the community, take advantage of our free software\", is confusing. \nHow should I do to download the software?\nthanks! ",
        "content": "",
        "topics": [],
        "user": {
            "pk": 26276,
            "forum_user": {
                "id": 26249,
                "user": 26276,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e6468810e31ba871ed191a1d0c5aa934?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sleepycat",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "how-to-download-panoramix-for-free",
        "pk": 1023,
        "published": false,
        "publish_date": "2022-01-04T12:51:58.550671+01:00"
    },
    {
        "title": "Elliptique: a new multichannel reverberator for SPAT - Benoit Alary, Etienne Démoulin",
        "description": "Presented during the IRCAM Forum Workshops in March 2023.",
        "content": "<p>Nowadays, traditional artificial reverberation systems can feel quite limited when using a complex reproduction system with many loudspeakers or spatial audio formats (binaural, ambisonics, WFS, &hellip;). From enhancing the natural reverberation of a room during a live performance to creating unique virtual sonic environments in virtual and augmented reality, being able to control the reverberation is always key. In the next release of SPAT, we are introducing a new multichannel reverberator named Elliptique. With Elliptique, you can create several reverberation areas in space, each using different reverberation properties. These areas will adapt to the output loudspeaker layout and format to produce anything from natural-sounding reverberation, to completely abstract acoustic environments.&nbsp;</p>\r\n<p>In this presentation, Benoit Alary (researcher, IRCAM/EAC) and &Eacute;tienne D&eacute;moulin&nbsp;(computer music designer, IRCAM) will go over the essential aspects of working with this new Max external and take an in-depth dive into how Elliptique was recently used to create complex spatial reverberation during a live performance in Ircam&rsquo;s &ldquo;Espace de projection&rdquo;. With this new reverberator, we want to explore fresh paradigms for creating rich spatial reverberation that can continue to evolve as we discover new creative use for it.</p>",
        "topics": [
            {
                "id": 1130,
                "name": "Ateliers du Forum Paris 2023",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 403,
                "name": "Reverberation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 24564,
            "forum_user": {
                "id": 24537,
                "user": 24564,
                "first_name": "Benoit",
                "last_name": "Alary",
                "avatar": "https://forum.ircam.fr/media/avatars/BA_2021_06.jpg",
                "avatar_url": "/media/cache/27/b3/27b31b6ef7aaf23499bed29603125e56.jpg",
                "biography": "Benoit Alary is a researcher in the Acoustic and Cognitive Spaces team of the STMS lab, part of IRCAM. He has over fifteen years of experience in immersive audio, shared between industry and academia, including a Ph.D. in acoustics and signal processing from Aalto University (Finland) and an MSc from the University of Edinburgh. His research centers around sound reproduction, analysis/synthesis, and perception. His current projects involve artificial reverberation, 6DoF sound reproduction, machine learning, and virtual acoustics.",
                "date_modified": "2025-11-07T10:18:43.509252+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 317,
                        "forum_user": 24537,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-07",
                        "type": 0,
                        "keys": [
                            {
                                "id": 566,
                                "membership": 317
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "balary",
            "first_name": "Benoit",
            "last_name": "Alary",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3070,
                    "user": 24564,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "elliptique-a-new-multichannel-reverberator-for-spat",
        "pk": 2043,
        "published": true,
        "publish_date": "2023-03-13T15:55:35+01:00"
    },
    {
        "title": "Magic Piano AR - Grégoire Lemoulant",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>Like guitar hero but on a real piano ! Follow and tap the notes coming directly on your piano.<br />No background in music theory required</p>\r\n<p>&bull;Access famous piano songs available in the app<br />&bull;Choose your tempo<br />&bull;Import any piano midi file<br />&bull;Watch your hands, the keyboard and the notes coming in AR at the same time</p>\r\n<p><a href=\"https://drive.google.com/file/d/1fPrZUI6ha5iF68NSpNDXvywe8_pU3p_G/view?usp=sharing\">Demo video&nbsp;</a></p>",
        "topics": [],
        "user": {
            "pk": 37422,
            "forum_user": {
                "id": 37372,
                "user": 37422,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d90154cec84d775b0ae515cc24130d42?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "glemoulant",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "magic-piano-ar",
        "pk": 2097,
        "published": true,
        "publish_date": "2023-02-28T17:21:15+01:00"
    },
    {
        "title": "OMax5 Tutorials",
        "description": "This page gathers video tutorials on OMax5.",
        "content": "<h2><img src=\"/media/uploads/images/ban_omax.png\" alt=\"\" width=\"767\" height=\"312\" /></h2>\r\n<h1>Tutorials</h1>\r\n<p>OMax is a generative music program that (co-)improvises using material from pre-recorded scores or live human contributions. Born around twenty years ago, it uses a data structure known as the factor oracle to improvise in a style similar to that of its input. It realises a free improvisation from the input on the fly, and creates multiple layers of relationships with factor oracles built on different descriptors.<br /><br /></p>\r\n<p><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/nQPoRYKUpu8?si=I95d71eAypCpt-ja\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p></p>\r\n<p><iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/IcIV3iwnpuQ?si=jhxpusX2mpoYnTzK\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h2>________________</h2>\r\n<div>\r\n<p dir=\"auto\">OMax5.x (c) Ircam, CAMS (EHESS) 2009-2024</p>\r\n<p dir=\"auto\">OMax was created by<span>&nbsp;</span><a href=\"https://www.ircam.fr/person/gerard-assayag\" rel=\"nofollow\">G&eacute;rard Assayag</a>,<span>&nbsp;</span><a href=\"https://www.ehess.fr/fr/personne/marc-chemillier\" rel=\"nofollow\">Marc Chemillier</a>,<span>&nbsp;</span><a href=\"http://repmus.ircam.fr/bloch\" rel=\"nofollow\">Georges Bloch</a><span>&nbsp;</span>in collaboration with<span>&nbsp;</span><a href=\"https://music-cms.ucsd.edu/people/faculty/regular_faculty/shlomo-dubnov/index.html\" rel=\"nofollow\">Shlomo 
Dubnov</a>.</p>\r\n<p dir=\"auto\">OMax5 was developed by Benjamin Levy.</p>\r\n<p dir=\"auto\">Version OMax5.5 is by<span>&nbsp;</span><a href=\"http://repmus.ircam.fr/bloch\" rel=\"nofollow\">Georges Bloch</a>, with the help of<span>&nbsp;</span><a href=\"https://www.ircam.fr/person/mikhail-malt\" rel=\"nofollow\">Mikhail Malt</a>,<span>&nbsp;</span><a href=\"https://www.ircam.fr/person/marco-fiorini\" rel=\"nofollow\">Marco Fiorini</a></p>\r\n<p dir=\"auto\"><a href=\"https://www.ircam.fr/projects/pages/reach-project\" rel=\"nofollow\">IRCAM REACH Project</a><span>&nbsp;</span><a href=\"http://repmus.ircam.fr/home\" rel=\"nofollow\">IRCAM Musical Representations Team</a></p>\r\n<p dir=\"auto\">OMax5.x is part of the research project ERC REACH (Raising Co-creativity in Cyber-Human Musicianship) directed by G&eacute;rard Assayag.</p>\r\n</div>",
        "topics": [
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2788,
                "name": "Improvisation, generativity and co-creative interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1287,
                "name": "REACH",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2184,
                "name": "RepMus",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 106,
                "name": "Software",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1647,
                "name": "Technologies Ircam Free",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17748,
            "forum_user": {
                "id": 17743,
                "user": 17748,
                "first_name": "Gerard",
                "last_name": "Assayag",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e7f22ca09fef8b854d33ed5de26b107e?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-03T15:40:57.523680+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1236,
                        "forum_user": 17743,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "assayag",
            "first_name": "Gerard",
            "last_name": "Assayag",
            "bookmarks": []
        },
        "slug": "omax5-tutorials",
        "pk": 3422,
        "published": true,
        "publish_date": "2025-05-15T17:18:43+02:00"
    },
    {
        "title": "The Gods are Calling by Shravni Sangamnerkar and Tanya Chaturvedi",
        "description": "Shravni Sangamnerkar and Tanya Chaturvedi are 25-year-old visual/sound explorers, storytellers, cross-cultural collaboration enthusiasts, and occasional exaggerative narrators based between India and the United Kingdom. On their journey of self-discovery and evolution, their interests lie in telling stories of deep-rooted Indian culture and its position relative to the world. Their passion for the exuberant nuances of their heritage and social order is reflected in their practice of invoking thought through technology in audiences. Shravni and Tanya are also patrons of experimental music and storytelling. Coming from diverse backgrounds in Design and Engineering, they went on to pursue master’s degrees in Digital Direction at the Royal College of Art, focusing on the future of storytelling and how culture and history can be documented responsibly through technology.",
        "content": "<h2></h2>\r\n<h2><img alt=\"Visuals: Through our approach of unifying machines and man to re-discover God\" src=\"https://forum.ircam.fr/media/uploads/user/0f089b167f5041dcb43c4c714157522e.png\" /></h2>\r\n<p>&nbsp;</p>\r\n<p>Concept: Through our approach of unifying machines and humans to re-discover God, we propose a simple experiment from the lens of Hinduism (being active practitioners ourselves), positing that the sound of music is elemental. We aim to reframe our relationship with the technological future, not as a binary, but as a triad: Man-Machine-God. We've explored the divine through mantras and mudras, with future plans to collaborate with various religious teachings and find common threads. Consider the profound parallels: Our cloud servers, vast and omniscient, holding the sum of human knowledge&mdash;are they not the modern manifestation of the Akashic records, the cosmic chronicles? These are the building blocks of recognising the eternal patterns that have always existed. Within the vast repository of human experience, every sacred text, every circuit board, and every line of code has its place. We will work with 108 (a holy number in Hinduism) sounds&mdash;from the ancient tanpura to the modern malls. This auditory exploration challenges us to rediscover, to perceive with our ears what our eyes cannot see, similar to the buildup of a beat and the drop. We explore these cycles through AI, traversing the cycles of time&mdash;the Kaal Chakra&mdash;from the cacophony of Kaliyuga to the harmony of Satyuga (time markers from Hindu scriptures). Identifying our current scenario (Kaliyuga) with the hope of ascending towards light. Our task is to reduce the noise&mdash;not just auditory, but spiritual&mdash;that interferes with our connection to the good (or God) within us.\u2028In this \"City of Gods,\" where the beep of a processor is indistinguishable from the Om of the universe, the gods are calling. 
Will you answer?</p>\r\n<p>Collaboration: As two individuals of the same belief across the world, we are combining our sounds with our culture.</p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 2253,
                "name": "calling",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2255,
                "name": "culture",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2254,
                "name": "lineage",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 39705,
            "forum_user": {
                "id": 39651,
                "user": 39705,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/6B55B6F1-E7EA-4D11-8E85-E11CE11F02C0.JPEG",
                "avatar_url": "/media/cache/e9/ed/e9ed9fb13c604570462c5b9135d3f670.jpg",
                "biography": "Informed by Shravni and Tanya’s Indian heritage, and delving into discoveries in shared childhood memories, they lean on the classic ‘Kabir ke dohe (tr: Kabir’s verses)’, reiterating cross-cultural existence, for interpretation. Across diverse languages and dialects, the message remains the same: to celebrate the love, reciprocity, and virtues that bind all humankind. We wish to use this opportunity to take two simultaneous approaches to identify the emotion of the Doha, weaving them together to send across ‘unity in diversity’ as a message.",
                "date_modified": "2024-11-06T20:21:16.248295+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 953,
                        "forum_user": 39651,
                        "date_start": "2024-10-07",
                        "date_end": "2025-10-07",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "shoonyaa",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3019,
                    "user": 39705,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "the-gods-are-calling",
        "pk": 3019,
        "published": true,
        "publish_date": "2024-10-07T07:30:22+02:00"
    },
    {
        "title": "ASAP - Origins, Developments and Prospects by Pierre Guillot",
        "description": "",
        "content": "<p><span>In this talk, Pierre Guillot will give a brief introduction to the historical heritage and the artistic and research context in which ASAP is developed, highlighting the challenges and innovative nature of the project. We will then present the possibilities offered by this suite of tools and discuss the prospects for further developments and improvements.&nbsp;ASAP is a set of audio plug-ins that allows you to transform sound creatively. You are invited to play with the sound representation and the synthesis parameters to generate new sounds. The plug-ins can also be used to correct defects in the sound and to improve audio rendering. Thanks to the ARA2 integration, the spectral transformations are integrated into your editing workflow.</span></p>\r\n<p><span><span>More info:&nbsp;</span><a href=\"https://forum.ircam.fr/projects/detail/asap/\">https://forum.ircam.fr/projects/detail/asap</a></span></p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/image_asap.png\" alt=\"\" width=\"878\" height=\"494\" /></p>",
        "topics": [],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "asap-origins-developments-and-prospects-by-pierre-guillot",
        "pk": 3078,
        "published": true,
        "publish_date": "2024-10-25T11:22:36+02:00"
    },
    {
        "title": "Paint a Raga by Nikita Raina",
        "description": "\"Paint a Raga\" is an interactive XR installation that bridges Hindustani and Western classical music, allowing users to play Indian ragas on a Western keyboard while displaying the corresponding Indian notation, fostering cross-cultural musical understanding between India and the West.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p>Presented by: Nikita Raina</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/nikitaraina13/\" target=\"_blank\">Biography</a></p>\r\n<p><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8c2ea80783ec170d7a0adddcbae3435b.jpg\" /><br /><br />As an Indian-American musician primarily exposed to Western music theory, Nikita began studying Hindustani classical music to connect more deeply with her roots. Struggling to understand its theory and notation, she explored various tutorials and forums where musicians attempted to translate between Hindustani and Western music concepts. However, she found no comprehensive online or digital tools that effectively addressed this gap.</p>\r\n<p>Hindustani and Western classical music exist in separate hemispheres, with minimal overlap in education. This lack of integration creates a learning barrier for musicians from both traditions, limiting their access to and understanding of each other&rsquo;s theoretical frameworks and notation. Currently, no digital tools adequately bridge this divide.</p>\r\n<p>In collaboration with Hindustani classically trained musicians, Nikita's research explores the use of XR technology to connect North Indian and Western classical musicians. Her ongoing project is an interactive installation that teaches users to play notes within an Indian raga&mdash;a melodic framework with specific ascending and descending scales&mdash;on a Western keyboard while displaying the equivalent Indian notation on screen. 
Using two MIDI keyboards connected to a sitar and grand piano, users create their own Indian-Western reactive musical artwork. By transforming music into a visual and auditory experience, the installation fosters cross-cultural understanding, bridging India and the West.<br /><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/37768747864d1de5f2797c136ee13717.png\" /></p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 89096,
            "forum_user": {
                "id": 88989,
                "user": 89096,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_5906_2_z8ou9oP.JPG",
                "avatar_url": "/media/cache/9f/9f/9f9fd0ac60ea9f6fd80e8e3fe24b9b26.jpg",
                "biography": "Nikita Raina is an Atlanta-London-based musician, engineer, and creative technologist with an MA in Immersive Technologies & Storytelling from the Royal College of Art and a BS in Industrial Engineering from Georgia Tech. Her multidisciplinary background motivates her keen interest in exploring innovative methods of intersecting art, design, and music with engineering and technology.\n\nBeing an Indian-American musician composing and writing music in three different languages and three different instruments, Nikita views music as a universal language that connects humans across the globe. Her current research observes a barrier for learning and accessing different genres of music from other cultures, particularly North Indian classical music. She seeks to use technology as a facilitator between different cultural styles of music, creating a more inclusive space.\n\nHer work has been exhibited internationally at venues including Ars Electronica Festival and Cromwell Place gallery, showcasing her commitment to blending creativity with emerging technologies.",
                "date_modified": "2025-02-27T08:01:48.004356+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nikitaraina13",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "paint-a-raga-by-nikita-raina",
        "pk": 3309,
        "published": true,
        "publish_date": "2025-02-26T06:04:01+01:00"
    },
    {
        "title": "Trajectory Score Library - Nadir BABOURI",
        "description": "This presentation has been cancelled.",
        "content": "<p>Trajectory Score Library is an Antescofo library of scripts that converts mathematical parametric functions into curves in order to control the trajectories of Spat5 sources. However, the data produced by the trajectories can also be used for other purposes, such as synthesis control, audiovisuals, etc.</p>\r\n<p></p>\r\n<p>Trajectory Score Library scripts are written in the Antescofo language. The language provides a convenient way of creating and transmitting automation parameters through the Open Sound Control protocol. Trajectory Score Library aims to use Max or Pure Data as a unified real-time environment to unite different computer music processes. Bindings with <a href=\"http://forumnet.ircam.fr/fr/produit/spat/\">Spat5</a> and <a href=\"http://gris.musique.umontreal.ca/\">SpatGris3</a> are available.</p>\r\n<ul>\r\n<li>\r\n<p>The Max and Pure Data example patches and the Antescofo code can be accessed as a <a href=\"https://github.com/nadirB/Trajectory_Score_Library\">GitHub project</a>, and the releases can be downloaded as a <a href=\"https://github.com/nadirB/Trajectory_Score_Library/releases\">zip file</a>.</p>\r\n</li>\r\n<li>\r\n<p><a href=\"https://creaa.unistra.fr/websites/gream/Activites/Colloque_JIM_2020_-_Pre-actes_-_BABOURI_Nadir.pdf\">A paper presenting the Trajectory Score Library</a> is available.</p>\r\n</li>\r\n</ul>",
        "topics": [
            {
                "id": 119,
                "name": "Antescofo",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 46,
                "name": "Antescofo language",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 886,
                "name": "pure data",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1169,
                "name": "SpatGris",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 26,
            "forum_user": {
                "id": 26,
                "user": 26,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Acousmatic_Miniature_1.jpg",
                "avatar_url": "/media/cache/e4/a3/e4a33a726757791da7c0210ad665a60f.jpg",
                "biography": "Nadir Babouri is an active member of the IRCAM Forum and a user of IRCAM's software, in which he was trained at the institute. He studied with Alexis Baskind (Spatialisateur), Jean Lochard (AudioSculpt), Mikhail Malt (Open Music), Benjamin Thigpen (Max/MSP), Nicolas Misdariis (Sound Design) and Jean-Louis Giavitto (Antescofo language).",
                "date_modified": "2025-04-15T10:27:19.119235+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nadir-b",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "trajectory-score-library",
        "pk": 2073,
        "published": true,
        "publish_date": "2023-02-18T20:25:21+01:00"
    },
    {
        "title": "Learn Numerology Course Online Step by Step",
        "description": "Join a numerology course online to learn numbers, improve life decisions, and earn certification. Start your journey with expert guidance today.",
        "content": "<p style=\"text-align: justify;\"><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a1126d09f2bc8cab3dd5e102123332b1.png\"></span></p>\n<p><span style=\"\">In today&rsquo;s fast-moving world, many people are looking for simple ways to understand themselves better and improve their lives. One such powerful and easy-to-learn method is numerology. If you&rsquo;ve ever been curious about what your birth date or name says about you, then learning numerology can be exciting and useful.</span></p>\n<p><span style=\"\">With the help of a </span><a href=\"https://bivs.com/numerology-course/\"><strong>numerology course online</strong></a><span style=\"\">, you can now learn everything from the comfort of your home. Let&rsquo;s explore how this works and why it is becoming so popular.</span></p>\n<h2><strong>What is Numerology?</strong></h2>\n<p><span style=\"\">Numerology is the study of numbers and how they affect our life. It is based on the idea that every number has a special meaning and energy. These numbers can tell us about our personality, career, relationships, and even future opportunities.</span></p>\n<p><span style=\"\">For example:</span></p>\n<ul>\n<li style=\"\"><span style=\"\">Your birth date can reveal your life path</span></li>\n<li style=\"\"><span style=\"\">Your name can show your strengths and weaknesses</span></li>\n</ul>\n<p><span style=\"\">Learning numerology is like learning a new language&mdash;but a very simple one!</span></p>\n<h2><strong>Why Choose a Numerology Course Online?</strong></h2>\n<p><span style=\"\">Today, most students prefer learning online because it is easy and flexible. 
A </span><strong>numerology course online</strong><span style=\"\"> allows you to study anytime, anywhere.</span></p>\n<h3><strong>Key Benefits:</strong></h3>\n<ul>\n<li style=\"\"><span style=\"\">Learn at your own pace</span></li>\n<li style=\"\"><span style=\"\">No need to travel</span></li>\n<li style=\"\"><span style=\"\">Access to expert teachers</span></li>\n<li style=\"\"><span style=\"\">Recorded classes for revision</span></li>\n<li style=\"\"><span style=\"\">Affordable compared to offline courses</span></li>\n</ul>\n<p><span style=\"\">Whether you are a student, working professional, or homemaker, online learning fits into your schedule easily.</span></p>\n<h2><strong>What Will You Learn in Numerology Courses?</strong></h2>\n<p><span style=\"\">When you join </span><strong>numerology courses</strong><span style=\"\">, you will start from basics and slowly move to advanced concepts.</span></p>\n<h3><strong>Topics Covered:</strong></h3>\n<ul>\n<li style=\"\"><span style=\"\">Introduction to numbers and meanings</span></li>\n<li style=\"\"><span style=\"\">Life path number calculation</span></li>\n<li style=\"\"><span style=\"\">Destiny and soul numbers</span></li>\n<li style=\"\"><span style=\"\">Name correction techniques</span></li>\n<li style=\"\"><span style=\"\">Relationship compatibility</span></li>\n<li style=\"\"><span style=\"\">Career guidance through numbers</span></li>\n</ul>\n<p><span style=\"\">These skills can help you guide yourself and even help others.</span></p>\n<h2><strong>Who Should Learn Numerology?</strong></h2>\n<p><span style=\"\">The best part about numerology is that anyone can learn it. 
You don&rsquo;t need any special background or degree.</span></p>\n<h3><strong>Ideal for:</strong></h3>\n<ul>\n<li style=\"\"><span style=\"\">Students curious about future</span></li>\n<li style=\"\"><span style=\"\">People interested in astrology and Vastu</span></li>\n<li style=\"\"><span style=\"\">Professionals looking for a side career</span></li>\n<li style=\"\"><span style=\"\">Anyone who wants self-growth</span></li>\n</ul>\n<p><span style=\"\">If you enjoy learning new things, then this course is perfect for you.</span></p>\n<h2><strong>Online Numerology Certification: Why It Matters</strong></h2>\n<p><span style=\"\">When you complete an </span><strong>online numerology certification</strong><span style=\"\">, you get proof of your knowledge and skills. This helps you build trust if you want to start your own practice.</span></p>\n<h3><strong>Benefits of Certification:</strong></h3>\n<ul>\n<li style=\"\"><span style=\"\">Adds credibility</span></li>\n<li style=\"\"><span style=\"\">Helps you get clients</span></li>\n<li style=\"\"><span style=\"\">Builds confidence</span></li>\n<li style=\"\"><span style=\"\">Opens career opportunities</span></li>\n</ul>\n<p><span style=\"\">Many people today are earning by offering numerology consultations online.</span></p>\n<h2><strong>How to Choose the Right Numerology Online Course?</strong></h2>\n<p><span style=\"\">There are many </span><strong>numerology courses online</strong><span style=\"\">, so choosing the right one is important.</span></p>\n<h3><strong>Look for:</strong></h3>\n<ul>\n<li style=\"\"><span style=\"\">Experienced teachers</span></li>\n<li style=\"\"><span style=\"\">Clear course structure</span></li>\n<li style=\"\"><span style=\"\">Practical training</span></li>\n<li style=\"\"><span style=\"\">Certification included</span></li>\n<li style=\"\"><span style=\"\">Good reviews</span></li>\n</ul>\n<p><span style=\"\">One trusted name in this field is </span><a 
href=\"https://bivs.com/\"><strong>Bhartiya Institute of Vedic Science</strong></a><span style=\"\">. They offer structured and easy-to-understand courses for beginners as well as advanced learners.</span></p>\n<h2><strong>Combine Numerology with Vastu for Better Results</strong></h2>\n<p><span style=\"\">Many experts suggest learning numerology along with Vastu. Both sciences work together to improve your life.</span></p>\n<h3><strong>Benefits of Learning Both:</strong></h3>\n<ul>\n<li style=\"\"><span style=\"\">Better understanding of energy and space</span></li>\n<li style=\"\"><span style=\"\">Improved home and office harmony</span></li>\n<li style=\"\"><span style=\"\">Stronger predictions and guidance</span></li>\n</ul>\n<p><span style=\"\">You can also explore a </span><a href=\"https://bivs.com/vastu-course\"><strong>Vastu Course</strong></a><span style=\"\"> or </span><strong>Online Vastu Courses</strong><span style=\"\"> along with your numerology studies to expand your knowledge.</span></p>\n<h2><strong>Career Opportunities After Learning Numerology</strong></h2>\n<p><span style=\"\">Once you </span><strong>learn numerology</strong><span style=\"\">, many opportunities open up for you.</span></p>\n<h3><strong>Career Options:</strong></h3>\n<ul>\n<li style=\"\"><span style=\"\">Numerology consultant</span></li>\n<li style=\"\"><span style=\"\">Life coach</span></li>\n<li style=\"\"><span style=\"\">Spiritual advisor</span></li>\n<li style=\"\"><span style=\"\">Content creator (blogs, YouTube)</span></li>\n<li style=\"\"><span style=\"\">Freelancer</span></li>\n</ul>\n<p><span style=\"\">With the rise of digital platforms, you can easily start your own online consultation business.</span></p>\n<h2><strong>Why Learning Numerology is a Smart Choice Today</strong></h2>\n<p><span style=\"\">In today&rsquo;s stressful life, people are always looking for guidance and clarity. 
Numerology provides simple answers using numbers, making it easy to understand.</span></p>\n<h3><strong>Reasons to Learn:</strong></h3>\n<ul>\n<li style=\"\"><span style=\"\">Helps in decision making</span></li>\n<li style=\"\"><span style=\"\">Improves self-awareness</span></li>\n<li style=\"\"><span style=\"\">Guides career and relationships</span></li>\n<li style=\"\"><span style=\"\">Can become a source of income</span></li>\n</ul>\n<p><span style=\"\">A </span><strong>numerology online course</strong><span style=\"\"> gives you both knowledge and practical skills.</span></p>\n<h2><strong>Final Thoughts</strong></h2>\n<p><span style=\"\">Starting a </span><strong>numerology course online</strong><span style=\"\"> is one of the easiest ways to enter the world of spiritual science. It is simple, interesting, and useful in daily life.</span></p>\n<p><span style=\"\">With the right guidance from institutes like Bhartiya Institute of Vedic Science, you can learn step by step and even turn it into a successful career.</span></p>",
        "topics": [
            {
                "id": 4546,
                "name": "Bhartiya Institute of Vedic Science",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4547,
                "name": "Numerology Course",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4545,
                "name": "Numerology Course Online",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4548,
                "name": "vastu course",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166415,
            "forum_user": {
                "id": 166178,
                "user": 166415,
                "first_name": "Bhartiya Institute",
                "last_name": "of Vedic Science",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/68bd1a3f4500f3db7f9e3b50dc43d197?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-02T10:46:33.893577+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "aditibivs",
            "first_name": "Bhartiya Institute",
            "last_name": "of Vedic Science",
            "bookmarks": []
        },
        "slug": "numerology-course-online-a-simple-guide-to-learn-the-power-of-numbers",
        "pk": 4581,
        "published": false,
        "publish_date": "2026-04-02T10:49:42.174971+02:00"
    },
    {
        "title": "Creative Sound Transformation and Analysis using ASAP & Partiels by Pierre Guillot",
        "description": "ASAP is a collection of audio plug-ins that lets you creatively transform sound. You are invited to play with the sound representation and the synthesis parameters to generate new sounds. The plug-ins can also be used to correct defects in the sound and to improve audio rendering. Thanks to the ARA2 integration, the spectral transformations are integrated into your editing workflow.",
        "content": "<p>In this talk, Pierre Guillot will give a brief introduction to the historical heritage and the artistic and research context in which the ASAP plug-ins were developed, highlighting the challenges and innovative nature of the&nbsp;project. He will present the functionalities offered by the ASAP collection, and in particular the plug-ins based on ARA2 technology. The Psycho Filter plug-in lets you draw and shape filters on the sound spectrogram and control their gain and fade. The sound representation and user interface enable you to create highly complex and precise surface filters to reduce or enhance specific parts of the sound's spectral components, to compensate for annoying artifacts in the sound, to isolate particular characteristics of the sound, and to creatively transform the sound. The Pitches Brew plug-in lets you transpose the pitch and formants of sounds by drawing and modifying their frequency curves. Beyond the exceptional quality of the processing, the plug-in offers a visual representation of the original fundamental frequencies, expected pitches, and formants, with curves enabling numerous original edits such as redrawing, transposing, stretching, copying, etc.</p>",
        "topics": [],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "creative-sound-transformation-and-analysis-using-asap-partiels-by-pierre-guillot",
        "pk": 3075,
        "published": true,
        "publish_date": "2024-10-25T09:29:18+02:00"
    },
    {
        "title": "Polyphonic – Reinventing Real-Time Multichannel Audio Processing with FPGA Technology by Maxime Popoff",
        "description": "Polyphonic unlocks high-performance, low-latency multichannel audio processing, making FPGA technology accessible like never before. By bridging the gap between raw computing power and creative applications, our platform simplifies the development of spatial and interactive audio systems.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><span><img src=\"/media/uploads/wfs_polyphonic.jpg\" alt=\"\" width=\"1116\" height=\"744\" /></span></p>\r\n<p><span>Presented by : Maxime Popoff</span></p>\r\n<p><a href=\"https://forum.ircam.fr/profile/mpopoff/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p>Polyphonic simplifies the design of interactive and immersive audio systems by making high-performance computing, once reserved for experts, accessible to all.</p>\r\n<p><span>Through a programmable board that seamlessly integrates with existing audio infrastructures and a dedicated software environment that streamlines programming, our solution enables the development of audio systems capable of handling hundreds of channels with imperceptible latency, more than 100 times lower than current industry standards.</span><br /><span></span></p>\r\n<p><span>At the core of this breakthrough is FPGA technology. FPGAs have long been considered a leading solution for high-speed processing, yet their steep learning curve has kept them out of reach for most audio professionals.</span><br /><span>Polyphonic removes this barrier by enabling FPGA use without requiring specialized hardware expertise, thanks to our dedicated compiler. 
This unlocks new possibilities for professional audio applications.</span><br /><span>Designed for researchers, engineers, and developers in acoustics and music technology, our platform streamlines and accelerates the development of spatialized, immersive, and interactive audio applications.</span><br /><span>From active noise cancellation and radio studios to live performances and 3D audio, Polyphonic is redefining what&rsquo;s possible in real-time audio processing, unlocking possibilities and pushing the boundaries of audio innovation.</span></p>\r\n<p><span>Polyphonic is a project born within the EMERAUDE research team (INSA Lyon, Inria) and supported by Inria Startup Studio.</span></p>\r\n<p><a href=\"https://polyphoniclab.github.io/\" target=\"_blank\">link to the project website</a></p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 103262,
            "forum_user": {
                "id": 103131,
                "user": 103262,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/df25c0cc607f87457c0c7dfe70d0f9fb?s=120&d=retro",
                "biography": "Maxime Popoff is a researcher and entrepreneur specializing in electronic and embedded systems for audio processing. He holds a PhD from INSA Lyon (Institut National des Sciences Appliquées) in France, where he conducted research on embedded audio platforms and their programming within the Emeraude team (INSA Lyon, Inria, GRAME).\n\nHe studied at Grenoble-INP and gained industry experience as an engineer at CEA Grenoble before joining Inria in 2020. His work focuses on real-time multichannel audio processing, FPGA-based architectures, and software tools that bridge the gap between high-performance computing and creative applications.\n\nHe is the founder of Polyphonic, a project aimed at democratizing high-performance audio processing by leveraging FPGA technology to provide low-latency, scalable, and flexible solutions for spatial and interactive audio applications.",
                "date_modified": "2025-03-02T15:15:22.710913+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "mpopoff",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "polyphonic-reinventing-real-time-multichannel-audio-processing-with-fpga-technology",
        "pk": 3299,
        "published": true,
        "publish_date": "2025-02-19T15:02:20+01:00"
    },
    {
        "title": "synaptic._null: An Experimental Audiovisual Performance on Perceptual Collapse by Kaiyuan Tang (China)",
        "description": "An exploration of perception, logic breakdown, and drifting consciousness through real-time audiovisual performance, combining Ableton Live and TouchDesigner in an immersive setting.",
        "content": "<p></p>\r\n<p><em>synaptic._null</em> is an experimental audiovisual performance created by Arcky Tang, an artist working between sound, image, and real-time systems. The project investigates the collapse of perception and the instability of consciousness, drawing from philosophical frameworks such as Baudrillard&rsquo;s <em>Simulacra and Simulation</em> and Morton&rsquo;s concept of hyperobjects, alongside phenomenology and quantum cognition.<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4f21f86f6087119224c0c76225f9fedf.jpg\" /></p>\r\n<p>The work unfolds as a live performance in three nonlinear parts&mdash;<strong>Simulacrum, Paradox, and Consciousness</strong>&mdash;each blurring the boundary between sensory input and cognitive expectation. Rather than following a linear narrative, the performance operates as a generative system where sound and image interact, dissolve, and reassemble in unpredictable ways.</p>\r\n<p>Technically, the piece is constructed through a feedback loop between <strong>Ableton Live</strong> (handling modular sound design, looping, and experimental structures) and <strong>TouchDesigner</strong> (driving generative and audio-reactive visuals). Signals are exchanged in real time via MIDI/OSC, allowing each domain to destabilize and reshape the other. This system reflects the project&rsquo;s conceptual interest in perceptual collapse: just as one thinks they recognize a pattern, it dissolves into noise, only to re-emerge in a different form.<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/544abe1ffe7fc618dedf59a7b7ee2097.png\" /></p>\r\n<p><em>synaptic._null</em> was presented at <strong>Outernet London</strong> as part of the RCA Digital Direction graduation program, staged across immersive projection surfaces with multichannel sound. 
The performance emphasizes immediacy and ephemerality&mdash;its form is never fixed, and each iteration becomes a unique drift through audiovisual instability.</p>\r\n<p>At its core, the project aims to question how we construct meaning under conditions of uncertainty. By placing the audience inside a constantly shifting sensory environment, <em>synaptic._null</em> invites participants to experience not clarity, but <strong>the beauty of collapse itself&mdash;where perception becomes porous, and consciousness drifts beyond stable form</strong>.</p>",
        "topics": [
            {
                "id": 128,
                "name": "Audiovisual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 850,
                "name": "experimental",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 146,
                "name": "Perception",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 126555,
            "forum_user": {
                "id": 126388,
                "user": 126555,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/2_fDgLESh.jpg",
                "avatar_url": "/media/cache/93/57/935711d78c8c07a43b4eb327169e2aaa.jpg",
                "biography": "Arcky is an audiovisual artist working at the intersection of surreal abstraction, consciousness exploration, and sensory experimentation. His practice expands the boundaries of perception and dissolves the logic of known reality—seeking transcendence through the collapse of structure and the intuitive resonance of sound and light.\n\nDeeply influenced by phenomenology, stream-of-consciousness aesthetics, and meditative states, Arcky’s work embraces dream logic, glitch textures, and ephemeral visuals to create immersive, improvisational performances. These performances become portals for nonlinear storytelling and cognitive dissonance, inviting the audience into a fluid space between detachment and empathy, where perception folds and time distorts.\n\n\nArcky believes the world is an absurd illusion. Yet, by surrendering to the unknown and embracing the instability of logic, one may unlock new dimensions of being. For him, creation is not about control—but about letting go, listening deeply, and finding meaning in the unseen.",
                "date_modified": "2025-10-12T14:22:06.449888+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "arckytang",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "synaptic_null-an-experimental-audiovisual-performance-on-perceptual-collapse-by-kaiyuan-tang",
        "pk": 3779,
        "published": true,
        "publish_date": "2025-10-07T14:37:28+02:00"
    },
    {
        "title": "Introduction to Offline Algorithmic Audio with bellplay~ by Felipe Tovar-Henao",
        "description": "This workshop offers a practical, hands-on introduction to offline audio generation, analysis, and processing in bellplay~, a scripting-based environment using the bell programming language. \r\nUnlike real-time environments such as Max or SuperCollider, bellplay~ renders sound offline rather than in real-time. This approach enables techniques such as multi-pass and look-ahead processing, computationally expensive operations, non-causal behavior, and analysis-driven transformations without concern for CPU limits or polyphonic voice management and allocation. Additionally, it's designed to bridge symbolic music (i.e., notation-based) operations with audio, facilitating the composition of acoustic, electronic, and mixed music within a single environment.\r\nFor more information, visit: https://bellplay.net",
        "content": "<p><strong>➡️ This presentation is part of&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><strong><img alt=\"bellplay\" src=\"https://forum.ircam.fr/media/uploads/user/6eebcbdd70901670a4ff7f87e2dcc10f.png\" /></strong></p>\r\n<p><strong>bellplay~ Workshop</strong></p>\r\n<p>bellplay~ is a scripting environment for audio and music composition. Unlike real-time environments such as Max or SuperCollider, it renders sound offline, which opens up techniques such as multi-pass and look-ahead processing, computationally expensive operations, non-causal behavior, and analysis-driven transformations &mdash; without the constraints of CPU limits or voice management. It also bridges symbolic, notation-based operations with audio, making it suitable for acoustic, electronic, and mixed music within a single environment.</p>\r\n<p>For more information, visit: <a href=\"https://bellplay.net\">https://bellplay.net</a>.</p>\r\n<hr />\r\n<p><strong>Before</strong></p>\r\n<p>You will receive setup instructions in advance. 
Please install and configure bellplay~ on your laptop (Mac or Windows) before the session so we can use the time efficiently.</p>\r\n<hr />\r\n<p><strong>During</strong></p>\r\n<p>The workshop builds progressively toward writing your own granular processing script, using this as a vehicle for introducing the environment and its core features. The session runs 60 minutes.</p>\r\n<p><em>Foundations:</em> launching the environment, writing and running a script, generating a simple sound and placing it on a timeline, and rendering audio to disk.</p>\r\n<p><em>Core project &mdash; granulation:</em> importing a sample into a buffer; generating grains by defining onset position, duration, gain, and stereo placement through scripted rules; looping and layering grains to build dense textures; and rendering, adjusting parameters, and re-rendering iteratively.</p>\r\n<p><em>Optional topics (time permitting):</em> running basic analysis &mdash; such as onset detection or spectral centroid &mdash; and using the results to shape grain behavior; exporting rendered audio or buffer data for use outside the environment.</p>\r\n<hr />\r\n<p><strong>After</strong></p>\r\n<p>By the end of the session you will have a working granulation script and a clear understanding of how bellplay~ handles audio algorithmically. You will also receive example scripts and documentation to continue working independently after the workshop.</p>\r\n<p>This workshop is aimed at composers and sound artists interested in direct, fine-grained control over sound through code, without the constraints of real-time interaction or visual programming.</p>",
        "topics": [
            {
                "id": 669,
                "name": "Bach",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2526,
                "name": "bell",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4290,
                "name": "computer-assisted algorithmic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 313,
                "name": "Machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2545,
                "name": "sound design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 7953,
            "forum_user": {
                "id": 7950,
                "user": 7953,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/author_headshot.png",
                "avatar_url": "/media/cache/5f/53/5f533165fbe54973a10eec94546b99f0.jpg",
                "biography": "Felipe Tovar-Henao is a US-based multimedia artist, developer, and researcher whose work explores computer algorithms as expressive tools for human and post-human creativity, cognition, and pedagogy. This has led him to work on a wide variety of projects involving digital instrument design, software development, immersive art installations, generative audiovisual algorithms, machine learning, music information retrieval, human-computer interaction, and more. His music is often motivated by and rooted in transformative experiences with technology, philosophy, and cinema, and it frequently focuses on exploring human perception, memory, and recognition.\n\nHe has held research and teaching positions at various institutions, including as the 2021/22 CCCC Postdoctoral Researcher at the University of Chicago, Lecturer in Music Theory and Composition at Universidad EAFIT, as well as Associate Instructor and Coordinator of the IU JSoM Composition Department. He currently serves as the 2023/25 Charles H. Turner Postdoctoral Fellow in Music Composition at the University of Cincinnati's College-Conservatory of Music.",
                "date_modified": "2026-03-02T21:34:35.083680+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1004,
                        "forum_user": 7950,
                        "date_start": "2016-06-13",
                        "date_end": "2025-11-13",
                        "type": 0,
                        "keys": [
                            {
                                "id": 637,
                                "membership": 1004
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "felipetovarhenao",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "introduction-to-offline-algorithmic-audio-with-bellplay-by-felipe-tovar-henao",
        "pk": 4401,
        "published": true,
        "publish_date": "2026-02-20T05:54:59+01:00"
    },
    {
        "title": "Buy Premium original-pashmina-shawls Online with Authentic Craftsmanship",
        "description": "original-pashmina-shawls are known for their unmatched softness, warmth, and timeless elegance. Crafted from fine Himalayan wool and handwoven by skilled artisans, these shawls represent true luxury and heritage. Whether for weddings, winter styling, or everyday fashion, original-pashmina-shawls add a touch of sophistication to any outfit.",
        "content": "<h2>Introduction to original-pashmina-shawls</h2>\n<p><a href=\"https://elaboreluxury.com/collections/original-pashmina-shawls\">original-pashmina-shawls </a>are one of the most luxurious and elegant fashion accessories in the world. Known as the &ldquo;soft gold of the Himalayas,&rdquo; these shawls are crafted from the fine wool of the Changthangi goat, making them incredibly soft, lightweight, and warm.</p>\n<p>These shawls are not just winter wear&mdash;they are a symbol of tradition, craftsmanship, and timeless style.</p>\n<hr>\n<h2>What Makes original-pashmina-shawls Special?</h2>\n<p>The beauty of original-pashmina-shawls lies in their authenticity and craftsmanship. Each shawl is hand-spun and handwoven by skilled Kashmiri artisans using traditional techniques passed down for generations.</p>\n<p>Key features include:</p>\n<ul>\n<li>Ultra-soft and lightweight fabric</li>\n<li>Exceptional warmth without heaviness</li>\n<li>Elegant and timeless designs</li>\n<li>Long-lasting quality</li>\n</ul>\n<hr>\n<h2>The Craftsmanship Behind original-pashmina-shawls</h2>\n<p>Creating original-pashmina-shawls is a detailed and time-consuming process. 
It involves:</p>\n<ul>\n<li><strong>Wool Collection:</strong> Sourced from Himalayan goats</li>\n<li><strong>Hand Spinning:</strong> Done using traditional methods</li>\n<li><strong>Hand Weaving:</strong> Crafted on wooden looms</li>\n<li><strong>Finishing &amp; Embroidery:</strong> Adds artistic detailing</li>\n</ul>\n<p>Each piece can take weeks or even months to complete, making every shawl unique.</p>\n<hr>\n<h2>Types of original-pashmina-shawls</h2>\n<p>There are several styles available to suit different preferences:</p>\n<ul>\n<li>Kani Pashmina Shawls</li>\n<li>Embroidered Pashmina Shawls</li>\n<li>Zari Pashmina Shawls</li>\n<li>Kalamkari Pashmina Shawls</li>\n<li>Pure Pashmina Shawls</li>\n</ul>\n<p>These varieties offer both traditional and modern design options.</p>\n<hr>\n<h2>Why Choose original-pashmina-shawls?</h2>\n<p>Choosing original-pashmina-shawls means investing in quality, luxury, and heritage.</p>\n<ul>\n<li>100% authentic craftsmanship</li>\n<li>Perfect for weddings and special occasions</li>\n<li>Ideal for both ethnic and western outfits</li>\n<li>Sustainable and handmade</li>\n</ul>\n<p>In today&rsquo;s market, where many fake products exist, buying authentic Pashmina ensures true value and durability.</p>\n<hr>\n<h2>Styling Tips for original-pashmina-shawls</h2>\n<p>You can style original-pashmina-shawls in multiple ways:</p>\n<ul>\n<li>Drape over shoulders for a classic look</li>\n<li>Pair with western outfits for modern styling</li>\n<li>Use as a winter wrap for warmth</li>\n<li>Style for weddings and festive occasions</li>\n</ul>\n<hr>\n<h2>Conclusion</h2>\n<p>original-pashmina-shawls are more than just fashion accessories&mdash;they are a blend of tradition, luxury, and timeless elegance. 
Their unmatched softness, warmth, and handcrafted beauty make them a must-have for every wardrobe.</p>\n<p>If you are looking for premium quality and authentic craftsmanship, original-pashmina-shawls are the perfect choice for both style and comfort.</p>",
        "topics": [
            {
                "id": 4536,
                "name": "original-pashmina-shawls",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4537,
                "name": "pashmina shawls for women",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 166341,
            "forum_user": {
                "id": 166105,
                "user": 166341,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a053613fe6f95130b8e798ec65e5832b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-01T13:44:58.436606+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "elaboreluxury",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "buy-premium-original-pashmina-shawls-online-with-authentic-craftsmanship",
        "pk": 4572,
        "published": false,
        "publish_date": "2026-04-01T14:10:23.897721+02:00"
    },
    {
        "title": "Tweak of the Week (W25)",
        "description": "Embedded interactive patch - New every week!",
        "content": "<div style=\"position: relative; padding-bottom: 65%; height: 0; border-radius: 10px; overflow: hidden;\"><iframe width=\"300\" height=\"150\" style=\"border: none; position: absolute; top: 0; left: 0; width: 100%; height: 100%;\" src=\"https://tweakable.org/embed/examples/munk_v5?view=panel\" frameborder=\"0\"></iframe></div>\r\n<h4><span>Create your own Tweakable at&nbsp;</span><a href=\"http://tweakable.org/\">tweakable.org</a>.</h4>",
        "topics": [
            {
                "id": 70,
                "name": "Audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 396,
                "name": "Audio-visual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 128,
                "name": "Audiovisual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 272,
                "name": "Generative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 206,
                "name": "Interactive real-time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 127,
                "name": "Video",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18424,
            "forum_user": {
                "id": 18417,
                "user": 18424,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d36f7c122c36bf714b376ed2c132c929?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jwvsys",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "tweak-of-the-week",
        "pk": 667,
        "published": true,
        "publish_date": "2020-06-05T12:35:49+02:00"
    },
    {
        "title": "Augmented Physicality and Emotionality",
        "description": "Artistic research residency 2017.18.\r\nEmanuele Palumbo.\r\nIn collaboration with the CREAM project team and the Analysis of Musical Practices team at Ircam-STMS.",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\"></h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Artistic Research Residency 2017.18</h3>\r\n<p><strong>Augmented Physicality and Emotionality</strong><br />In collaboration with the <a href=\"https://www.ircam.fr/project/detail/cream/\">CREAM</a> project team and the <a href=\"https://www.ircam.fr/recherche/equipes-recherche/apm/\">Analysis of Musical Practices</a> team at Ircam-STMS.</p>\r\n<p>The research project will study simple, scientifically grounded interactions that make it possible to create a solf&egrave;ge of the physiological parameters, perception, and emotional responses generated in a piece of music played live, for an instrumentalist and for a performer acting as an emotional resonator. A first phase of the project involves building and documenting the capture system. Subsequently, through scientific and artistic experiments (studies), the aim is to work with and organize precise physiological and emotional states of the performers, solo or in combination. An important feature of the project will be the feedback loop that can be created between the physiological and emotional capture of the performer, the music generated in real time from these data, and the act of listening to it, which in turn alters the performer's emotional state. 
The chosen instrument is the saxophone, though other wind instruments may be studied; the nature of the performer remains to be specified. The final phase of the project is devoted to producing the research documentation and writing the solf&egrave;ge. This system, conceived as an IMC (Integral Music Controller, as defined by Knapp and Cook [2005]), aims to create a device for augmented physicality and emotionality.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Emanuele Palumbo</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\"><img src=\"https://www.ircam.fr/media/uploads/personnels/cursus%202/.thumbnails/emanuele_palumbo.jpg/emanuele_palumbo-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biography</h3>\r\n<p>Emanuele Palumbo (Italy, 1987) studied composition at the Milan Conservatory, then at the Conservatoire national sup&eacute;rieur de musique et de danse de Paris in the class of G&eacute;rard Pesson. He studied with H&egrave;ctor Parra in levels 1 &amp; 2 of Ircam's Cursus program in composition and computer music. He has taken part in numerous master classes with composers such as Francesco Filidei, Franck Bedrossian, Pierluigi Billone, Stefano Gervasoni, Rapha&euml;l Cendo, and Mark Andre. 
His works have been performed by Ensemble Linea, Ensemble Multilat&eacute;rale, the MDI Ensemble, and Ensemble Talea, as well as by soloists such as Alfonso Alberti and Christophe Mathias, and broadcast on France Musique. In his music, he builds musical temporal structures from sounds arising from an instrumental research whose goal is the strength and singularity of the sonic material. He devises unusual playing techniques, sometimes using accessories to obtain new sonorities. The instrumental ensemble thus becomes a space charged with a new aura. He is currently working on a piece for piano and transducers, and is beginning a biophysiological music project with dancers, performers, and musicians. Within it he is developing, in particular, a physiological and emotional recognition system (LISTEN).</p>\r\n</div>\r\n</div>\r\n<p><strong>Email:</strong><span>&nbsp;</span>Emanuele.Palumbo (at) ircam.fr</p>\r\n<h2 class=\"dotted\">Links</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"https://soundcloud.com/emanuele-palumbo\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>https://soundcloud.com/emanuele-palumbo</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 32,
                "name": "Recherche Artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 30,
                "name": "Recherche Musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "physicalite-et-emotionalite-augmentees",
        "pk": 28,
        "published": true,
        "publish_date": "2019-03-21T17:02:27+01:00"
    },
    {
        "title": "AI, networked performance and aesthetic judgment - Hans Kretz",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>This practice-based research presentation will draw on examples from my own experience using JackTrip &ndash; in close collaboration with its developers at CCRMA &ndash; in directing and conducting student ensembles in projects that involve AI and networked performance. I will endeavor to highlight the anthropo-philosophical distinctions that govern the use of the expressions creativity, imagination, and fantasy, in order to better identify, beyond their everyday and generic use, the cognitive, philosophical, and aesthetic implications of co-creativity, and how this notion necessarily informs any attempt to reinterpret them.<br />The use of software allowing co-creativity greatly modifies the reflexes students have acquired in the context of improvisation. In doing so, it puts the whole activity of the student's aesthetic judgment back at the center. I will try to show how HCI restores to aesthetic judgment its capacity for cognitive orientation by reactivating the exercise of philosophical judgment.<br />By performing these AI-assisted improvisations in an online space, through the use of the low-latency, uncompressed network audio software JackTrip, a distributed space is created in which each performer occupies a unique role in a non-hierarchical relation.</p>\r\n<p>The digital mediation of each performer through the networked audio, and its reconstitution in a single online hub that can share a single virtual acoustic treatment, casts the relationship with the AI agent in a new light.</p>",
        "topics": [],
        "user": {
            "pk": 24769,
            "forum_user": {
                "id": 24742,
                "user": 24769,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/kretz_photo_florence.jpeg",
                "avatar_url": "/media/cache/32/9c/329c88320f1a66b9f139ae07d4bbffc7.jpg",
                "biography": "Hans Kretz is a conductor, pianist, researcher and author. He holds PhDs in Music and Philosophy from the University of Leeds and the University of Paris 8 Vincennes-Saint-Denis respectively. His research interests include philosophy of culture, aesthetics, philosophical anthropology and philosophy of technology. His writings have appeared in the Recherches d'Esthétique Transculturelle series of L'Harmattan, and in the Cahiers Critiques de Philosophie. He is a Lecturer at Stanford University, where he currently conducts and directs the Stanford New Ensemble.",
                "date_modified": "2025-12-28T14:44:33.622746+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 979,
                        "forum_user": 24742,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "hkretz",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ai-networked-performance-and-aesthetic-judgment",
        "pk": 2095,
        "published": true,
        "publish_date": "2023-02-28T17:11:37+01:00"
    },
    {
        "title": "The Symphony of Civilisation by Jeanyoon Choi & Suhyun Lim",
        "description": "The Symphony of Civilisation is a multi-device web artwork, encompassing more than ten channels in a symphonic format. Structured in four movements, it offers a loosely connected cross-section abstract representation of civilisation’s past, present, and future within an immersive setting.",
        "content": "<p>Our civilisation is brilliant, almost unfathomable. Reflect on humanity fulfilling the dream of flight &ndash; the evolution of marvellous transportation enveloping the spatial-temporal dimension we inhabit. Look at Artificial Intelligence, a complex system we built yet one that operates beyond our comprehension. Observe the new epoch marked at Crawford Lake, signalling the time our civilisation dominates ecology. Welcome to the Anthropocene.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0c3ae80773547b154f2f901718b04ac3.jpg\" /></p>\r\n<p>It is undeniable we are living in a golden age. But are we? Isn't our fragmented existence leading to mental illness? What about the persistent conflicts around the world? What about the accelerating climate crisis or the rise of numbers over humaneness? These are indeed troubling times, as civilisation seems more at risk of decline than of flourishing, with the Doomsday Clock standing at 90 seconds to midnight.</p>\r\n<p>Where are we headed? How can we resolve these issues for the present and the future?</p>\r\n<p>Composed in four movements, the Symphony of Civilisation mirrors the symphonic format, where each movement represents a certain period of humanity: Ancient, Post-Industrial, Contemporary, and Future. Rather than explicitly illustrating these eras, the symphony presents four contrasting cross-sections of civilisation. Just as traditional symphonies didn't narrate their stories directly but communicated the composer's intention through melodies and rhythms, this new multi-device web symphony&rsquo;s audio-visual landscape is designed to poetically immerse audiences, creating harmony from multiple channels.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/805ad99311311e82968661f220f7d3f6.jpg\" /></p>\r\n<p>The first three movements illustrate accelerationism, depicting an accelerating rhythm and tension. 
This suggests that our civilisation is accelerating beyond control, as hyperobjects exist beyond our understanding. Movement Three, for instance, highlights this theme with screens flickering rapidly across all channels at a speed beyond our perception. Each screen symbolises our fragmented and segmented contemporary world, trying to optimise and shine in its own direction yet failing to improve society as a whole; we are still not clever enough to realise that optimising the parts doesn't equal optimising the whole. Welcome to this accelerating zero-sum game. How will this end? Should we accelerate further, thus accelerating the catastrophe, as Marxist Accelerationists once claimed?</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/05d8e14bb7e400b9d7dab03b547ef176.jpg\" /></p>\r\n<p>Here emerges the philosophy of Dionysian togetherness within the context of this artwork. Movement Four, the finale of the symphony, portrays a speculative future where individual boundaries fade and people are collectively immersed in a Dionysian experience. This is the only interactive movement, where audience members co-compose the symphony - contrasting sharply with the previous movement, where chaos emerges but audiences have no control, remaining merely passive spectators. In this fourth movement, audiences scan a QR code and conduct the symphony from their mobiles. The harder the phones are shaken, the louder the audiovisual experience becomes. Audience members&rsquo; faces are collaged from different angles and appear on the projector holistically, creating a profound sense of immersion and eliciting primal goosebumps. This immersive experience suggests that the future of civilisation should emerge from Dionysian togetherness. 
The symphony concludes that the alternative future envisioned as Dong-Dong can be cultivated by our own hands, aspiring towards a brighter collective future, one where harmony arises from disharmony and collectiveness emerges from individualism.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/717a2e14612dc5a413bd8d1583badc89.jpg\" /></p>\r\n<p>As a multi-device web symphony, The Symphony of Civilisation incorporates more than ten channels - including projectors, large displays, laptops, and audiences&rsquo; mobiles - each acting as an interconnected audio-visual instrument conducted from a single laptop. The harmony created from multiple devices forms a unique audiovisual landscape depicting the past, present, and future of civilisation, reminiscent of a cityscape portrait of each era. The four movements of the symphony represent the Ancient, Post-Industrial, Present, and Future periods in chronological order.&nbsp;</p>\r\n<p>The First Movement: The Birth, symbolises the dawn of ancient civilisations worldwide. It begins with pure noise - a representation of nature and pure ignorance - soon transformed into vertical stripes, symbolising the birth of the artificial from the wild. Subsequently, a series of ancient architectural forms is displayed atop these stripes. With an accelerating rhythm, different architectures from early civilisations are illustrated - from the Egyptians to the Silla Dynasty - depicting the birth of diverse civilisations worldwide.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1ed7b3b52b3ee79a3690f2e7cc175cf0.jpg\" /></p>\r\n<p>The Second Movement: The Rise, illustrates the accelerating progress of civilisation within the post-Industrial era. 
We particularly focus on the evolution of transportation modes - from steamboats to jet planes - which have transformed the spatial-temporal dimension humanity inhabits, heavily influencing industrial civilisation. This movement features imagery generated by Midjourney, producing photorealistic images in a circular layout, poetically depicting the evolution of transportation and civilisation - as well as highlighting a homogeneity that contrasts with the diversity of the earlier movement.&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/6a0508369e6ac8694a5cade7aa716343.jpg\" /></p>\r\n<p>The Third Movement: The Rhythm, depicts contemporary civilisation. Various aspects of digital consumerism - anonymous SNS profiles, shopping malls, online stocks, advertisements, notifications, delivery apps, memes, and ranking systems - are shown rhythmically across all channels. Initially uniform at 120 beats per minute, the Tone.js-generated rhythm becomes gradually irregular and non-linear, with unprecedented acceleration and deceleration. Audiences experience immersive chaos across all channels, all altered following a single repetitive yet unstoppable rhythm. This chaos across the various channels represents the segmented and highly individualised contemporary civilisation, where all screens - all individuals - strive for their own success with full effort, which actually leads nowhere - depicting the gigantic zero-sum game within which we reside.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/076860a6060f8b420d9782fff43dc4fa.jpg\" /></p>\r\n<p>The Fourth Movement: The Dionysian, is the symphony's most interactive composition. Initiated from silence, it invites audiences to scan a QR code displayed on the screen. They are then prompted to shake their phones, with the mobile accelerometer causing the surrounding screens to brighten and Mahler's Symphony No. 1 to swell with each shake, filling and augmenting the space. 
Webcams from different channels are interconnected through WebRTC, mashing faces from different angles across all screens. This collective experience allows many audience members to conduct and control the entire room together, embodying Nietzsche's notion of Dionysian immersiveness. It is the most poetic, interactive, communal, and hopeful movement of the whole symphony, its communal interactiveness sharply contrasting with the previous movement and highlighting the theme and importance of togetherness.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a843299b4ab63aa2321982d7f5df9286.jpg\" /></p>\r\n<p>The symphony utilises multiple desktop-connected projectors, over five laptops, and audiences&rsquo; mobile phones to compose a Multi-Device Web Symphony. Audiences are invited to use their own laptops and mobiles to participate in and co-create this symphony. This presents an experimental form of new media art, situated in a hybrid position between installation and performance. We believe that, as techno-humans, exploring digital media in novel ways beyond AI's capabilities reflects a crucial aspect of humanness. We hope that the Multi-Device Web Symphony, unlike single-device experiences, can reveal the potential of interactivity and collaborative immersiveness combined.</p>\r\n<p>Why present civilisation as the first Multi-Device Web Symphony? Numerous media artworks depict the future through descriptive speculations, often with highly polarised utopian or dystopian visions. We believe, instead, that the complexity of the contemporary world requires a more conceptual approach. We wanted to create a work that facilitates subtle reflection among audiences on the past and present of civilisation, guiding them towards an interactive and communal future by the end of the symphony. 
This idea led to the creation of a chronicle of civilisation in a symphonic format.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/55bbb2d48dfc48d428adbf2b20ad07b9.jpg\" /></p>\r\n<p>The technical production of this artwork also aligns with these principles. Multi-channel screens showcase different aspects of civilisation, enabling the construction of a multi-layered composition that traditional moving images cannot produce. Various JavaScript-based frontend frameworks - React.js, Next.js, Styled Components - were employed for this composition. Specifically, we employed mobile accelerometer data propagated over WebSocket in real time within Movement Four to give audiences the experience of conducting their whole surroundings by shaking their mobiles. This symbolically highlights the importance of Sartrean &lsquo;Engagement&rsquo; towards the futuristic Dionysian vision we shall all co-create.&nbsp;</p>\r\n<p><br />This symphony will be premiered during the IRCAM Seoul Forum.</p>",
        "topics": [
            {
                "id": 2316,
                "name": "Jeanyoon Choi",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2319,
                "name": "Multi-Device Web Artwork",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2317,
                "name": "Suhyun Lim",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2320,
                "name": "Symphony of Civilisation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2318,
                "name": "The Symphony",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32475,
            "forum_user": {
                "id": 32427,
                "user": 32475,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Portrait_2.jpeg",
                "avatar_url": "/media/cache/0f/92/0f92d89c1ddc983d1436d40df37fe91c.jpg",
                "biography": "Jeanyoon Choi (b.1999) is a Korean Computational Artist, Creative Developer, and Inter-Device Interaction Designer. Putting the interconnection between mobile and screen devices in the core, he researches the possibility of reality and physicality induced from a purely digital domain, enabling an immersive dimension which is embedded within the real (Embedded Reality), rather than being segmented from the real (Virtual Reality). \n\nHis works have been exhibited and performed globally at Ars Electronica (AT), Istanbul Digital Art Festival (TK), Korean Cultural Centre Paris, IRCAM Forum (FR), IKLECTIK, Cromwell Place, The Place London, and Crypt Gallery (UK), Macedonian Museum of Contemporary Art, Athens Digital Art Festival (GR), ARKO Arts Theatre, Seoul Arts Centre (KR), and Manuka Arts Centre (AU). He also participated in various collaboration projects with Google, NASA JPL, KAIST Center for Anthropocene Studies, and Snapchat.He studied BSc Industrial Engineering at Seoul National University (KR), pursued an MA in Information Experience Design at the Royal College of Art (UK), and is currently a PhD Candidate at KAIST (KR), where he is a member of the Experience Design Lab (XD Lab)",
                "date_modified": "2024-10-29T11:28:34.872110+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ericggul",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-symphony-of-civilisation",
        "pk": 3042,
        "published": true,
        "publish_date": "2024-10-21T10:29:21+02:00"
    },
    {
        "title": "Overview of IRCAM’s Research and Technologies for Artistic Creation by Hugues Vinet",
        "description": "The aim of this conference is to provide an overview of recent and ongoing research and technological developments at IRCAM for sound and music creation and the resulting tools available through the IRCAM Forum - sound synthesis and processing, sound spatialisation, computer-assisted composition and orchestration, gesture-sound interaction, real-time languages, generative systems for improvisation, sound design.... It will be illustrated by numerous audio and video examples.",
        "content": "<p style=\"font-weight: 400;\">The aim of this conference is to provide an overview of recent and ongoing research and technological developments at IRCAM for sound and music creation and the resulting tools available through the IRCAM Forum - sound synthesis and processing, sound spatialisation, computer-assisted composition and orchestration, gesture-sound interaction, real-time languages, generative systems for improvisation, sound design.... It will be illustrated by numerous audio and video examples.</p>\r\n<p style=\"font-weight: 400;\"><img src=\"/media/uploads/image_studio_avec_chercheurs_ircam.jpeg\" alt=\"\" width=\"553\" height=\"369\" /></p>",
        "topics": [],
        "user": {
            "pk": 18210,
            "forum_user": {
                "id": 18203,
                "user": 18210,
                "first_name": "Hugues",
                "last_name": "Vinet",
                "avatar": "https://forum.ircam.fr/media/avatars/Hugues_Vinet_Portrait2017_large_low.jpg",
                "avatar_url": "/media/cache/4c/92/4c92397e1e69913141f89327eccc6007.jpg",
                "biography": "Hugues Vinet is Director of Innovation and Research Means of IRCAM. He has managed all research, development and innovation activities at IRCAM since 1994. He co-founded and ran for several terms the STMS (Science and Technology of Music and Sound) joint lab with French Ministry of Culture, CNRS and Sorbonne Université. He previously worked at the Groupe de Recherches Musicales of National Institute of Audiovisual in Paris where he managed the research and designed the first versions of the award winning real-time audio processing GRM Tools product. He has coordinated many collaborative R&D projects including recently H2020 VERTIGO in charge of the STARTS Residencies program managing 45 residencies of artists with technological research projects throughout Europe. He is currenty IRCAM's PI for EU MediaFutures project (artistic residencies for innovation in media) and DAFNE+ project dedicated to creatives' communities based on blockchain/NFT/DAO. He also curates the Vertigo Forum art-science yearly symposium at Centre Pompidou. He participates in various bodies of experts in the fields of audio, music, multimedia, information technology and innovation.",
                "date_modified": "2026-02-26T18:55:39.688865+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 417,
                        "forum_user": 18203,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-21",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "vinet",
            "first_name": "Hugues",
            "last_name": "Vinet",
            "bookmarks": []
        },
        "slug": "overview-of-ircams-research-and-technologies-for-artistic-creation-by-hugues-vinet-1",
        "pk": 3060,
        "published": true,
        "publish_date": "2024-10-23T11:25:01+02:00"
    },
    {
        "title": "Dans la tête de Gilgamesh - Fabrice Guédy, Armand Ledanois",
        "description": "Générer un oratorio électronique à partir de données cérébrales : l'épopée de Gilgamesh",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" width=\"990\" height=\"330\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par : Fabrice Gu&eacute;dy and Armand Ledanois<br /><a href=\"https://forum.ircam.fr/profile/Fabrice-Guedy/\">Biographie</a></p>\r\n<p>\"Dans la t&ecirc;te de Gilgamesh\" est un oratorio cod&eacute; en direct compos&eacute; par Fabrice Gu&eacute;dy sur un livret d'Armand Ledanois utilisant des donn&eacute;es EEG du cerveau et des mod&egrave;les math&eacute;matiques pour g&eacute;n&eacute;rer le son li&eacute; &agrave; chaque personnage et &agrave; chaque situation. Il a &eacute;t&eacute; cr&eacute;&eacute; lors de la f&ecirc;te de la science 2023 &agrave; l'Institut du Monde Arabe &agrave; Paris. Ce travail a &eacute;t&eacute; rendu possible gr&acirc;ce &agrave; la collaboration d'Etienne K&oelig;chlin, directeur du d&eacute;partement de neurosciences computationnelles de l'Ecole Normale Sup&eacute;rieure de Paris et d'Armand Ledanois, math&eacute;maticien. 
Il a &eacute;t&eacute; soutenu par l'Universit&eacute; PSL - \" partage des savoirs \", et r&eacute;alis&eacute; &agrave; l'Atelier des Feuillantines dans le cadre du programme de r&eacute;sidence \" Ecouter l'invisible \".</p>\r\n<p></p>\r\n<p><strong>Chapitres impliquant des algorithmes :</strong></p>\r\n<p><strong>- Activit&eacute; c&eacute;r&eacute;brale - EEG</strong></p>\r\n<p><strong>- Rythmes c&eacute;r&eacute;braux</strong></p>\r\n<p><strong>- Transmission de l'information - Potentiels d'action</strong></p>\r\n<p><strong>- Potentiels d'action - Canaux ioniques</strong></p>\r\n<p><strong>- La pr&eacute;diction de la catastrophe du d&eacute;luge primitif</strong></p>\r\n<p><strong>- La vague d&eacute;ferlante</strong></p>\r\n<p></p>\r\n<div class=\"embed-canva\" style=\"position: relative; width: 100%; height: 0; padding-top: 141.4286%; padding-bottom: 0; box-shadow: 0 2px 8px 0 rgba(63,69,81,0.16); margin-top: 1.6em; margin-bottom: 0.9em; overflow: hidden; border-radius: 8px; will-change: transform;\"><iframe width=\"300\" height=\"150\" style=\"position: absolute; width: 100%; height: 100%; top: 0; left: 0; border: none; padding: 0; margin: 0;\" loading=\"lazy\" src=\"https://www.canva.com/design/DAF89l2CZ3E/ODnWPiRWVFfm5T8ePGKJqA/view?embed\" allowfullscreen=\"allowfullscreen\" allow=\"fullscreen\">\r\n  </iframe></div>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1125,
            "forum_user": {
                "id": 1124,
                "user": 1125,
                "first_name": "Fabrice",
                "last_name": "Guédy",
                "avatar": "https://forum.ircam.fr/media/avatars/Fabrice_beaux_arts.jpg",
                "avatar_url": "/media/cache/07/50/07504576893a589dc3428d4de8ebc5ba.jpg",
                "biography": "Fabrice Guédy, composer, studied conducting, piano and composition in Paris. He teaches music analysis at Université de Paris Cité, piano and theory class at Atelier des Feuillantines, a conservatory and art school where students can learn simultaneously music and visual arts.\nAfter being assistant conductor of Daniel Barenboïm at « Orchestre de Paris », he entered the music research department of Ircam, worked with Gérard Assayag and André Riotte on composition formalization and new instrumental techniques.\nHe won the « Villa Medicis hors les murs » prize, and worked at UC-Santa Barbara. He was director of « Musique Lab 2 » project at Ircam, which consisted on developing a music pedagogy environment for music schools, allowing students to work directly with Ircam’s OpenMusic environment.\nAtelier des Feuillantines won the « Impact Societal » prize from Agence Nationale de la Recherche with Ircam’s ISMM team.\nHis compositions are played by ensembles like Ensemble Intercontemporain. Among his last works are « la Volière », with EIC and students from Conservatoire de Paris, and a piano concerto created by Madoka Fukami. He has created a live coding class at Atelier des Feuillantines.",
                "date_modified": "2024-05-23T20:24:25.788413+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Fabrice-Guedy",
            "first_name": "Fabrice",
            "last_name": "Guédy",
            "bookmarks": []
        },
        "slug": "dans-la-tete-de-gilgamesh",
        "pk": 2747,
        "published": true,
        "publish_date": "2024-02-16T16:39:20+01:00"
    },
    {
        "title": "Land Sound Promenade – Wind, Sun, Rain Musicalising the experience of an urban footbridge through environmental instruments by Nadine Schütz",
        "description": "",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><span>➡️ Venue&nbsp;:&nbsp;</span><a href=\"https://maps.app.goo.gl/kMVbVCugFfUzcG7o8\" target=\"_blank\">Franchissement urbain Pleyel, Canton de Saint-Denis-1, 93200 SaintDenis</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><strong>Subway Line&nbsp;14 : Ch&acirc;telet -&gt; Saint-Denis - Pleyel</strong> (takes 29 min. from Ircam)&nbsp;</div>\r\n<div class=\"c-content__button\">\r\n<p><strong><span></span></strong></p>\r\n<p></p>\r\n<p><strong><span></span></strong></p>\r\n</div>\r\n<div class=\"c-content__button\">&nbsp;<img src=\"/media/uploads/promenade_sonore_-_vent_soleil_pluie_nadine_schütz_2024_(credit_photo_nadine_schütz_dscf2517).jpg\" alt=\"\" width=\"803\" height=\"803\" />&nbsp;<img src=\"/media/uploads/promenade_sonore_-_vent_soleil_pluie_nadine_schütz_2024_(credit_photo_nadine_schütz_dscf2543)_brighter.jpg\" alt=\"\" width=\"810\" height=\"607\" /></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">\r\n<div class=\"page\" title=\"Page 3\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>Presented by Nadine Sch&uuml;tz</span></p>\r\n<p><a href=\"https://forum.ircam.fr/profile/ns_echora/\" target=\"_blank\">Biography</a></p>\r\n<p><span></span></p>\r\n<p><span>The land sound artwork </span><span>Promenade Sonore : Vent, Soleil, Pluie </span><span>is a sonic promenade in three parts, imagined and composed by sound artist 
Nadine Schütz for the public space that is the Pleyel footbridge. Three sculptural sound-generating instruments were created specifically for the three supporting structures, each corresponding to a meteorological element. They create a relaxing landscape ambience that varies the perception of the site's materiality, spatiality and climate. Wind, sun and rain are the musicians. </span></p>\r\n<p><span>This land sound artwork results from a close collaboration between the sound architect and composer Nadine Schütz and the architect-engineer Marc Mimram. It proposes a new approach to art and music in public spaces by considering them an integral part of a place-making process. The three sound installations combine mechanical and electro-acoustic as well as pre-recorded and generative components that augment each other, thus exploring composition based on environmental interaction in different ways. </span></p>\r\n<p><span>Project credits:<br /> Concept, Creation-Composition, Design and AD: Nadine Schütz (((Echora))) </span></p>\r\n<p><span>Fabrication, Installation, Sound Engineering and Audio Software Development: Citynox, Music Unit (Manuel Poletti, Alexandre Chaigne), Écouter-Voir, Idéalpose. </span></p>\r\n<p><span>Public commission (client): Plaine Commune, Plaine Commune Développement<br /> Architect-engineer (bridge design): Marc Mimram Architecture Ingénierie</span></p>\r\n<p>An initiative by Plaine Commune, Territoire de la culture et de la cr&eacute;ation, placing&nbsp;art and culture at the heart of urban change in association with the city of Saint-Denis.</p>\r\n<p></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [],
        "user": {
            "pk": 17607,
            "forum_user": {
                "id": 17604,
                "user": 17607,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Sonic_Topologies_1257_b_cutsquare_smallsmall.jpg",
                "avatar_url": "/media/cache/b4/99/b499fa45336c40f5a3857c39a793e3a0.jpg",
                "biography": "Nadine Schütz is a sound artist, architect and composer from Switzerland, based in Paris. She explores the auditory landscape like an environmental interpreter and composes by developing the acoustic qualities and ambiences of a site. Space and place become thus a creative score that informs and directs its own transformation. Her compositions, performances and scenographic sound work have been presented in Zurich, Paris, London, Venice, Naples, New York, Moscow, Tokyo and Kyoto. Within urban development projects, her interventions combine the artistic reading of a site with the concern for augmenting its acoustic comfort and identity. Through an original combination of techniques derived from bio- and psychoacoustics, music, sculpture and landscape architecture, she creates sound installations and acoustic designs that participate tangibly in users' daily experiences. Nadine holds a PhD in landscape acoustics from ETH Zurich, where she installed a new studio for the spatial simulation of sonic landscapes. She teaches at ETH Zurich and Parsons Paris and is currently a guest composer in the Acoustic-and-Cognitive-Spaces and the Perception-and-Sound-Design Teams at IRCAM-STMS.",
                "date_modified": "2024-03-21T11:01:29.312466+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 766,
                        "forum_user": 17604,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "ns_echora",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "land-sound-promenade-wind-sun-rain-musicalising-the-experience-of-an-urban-footbridge-through-environmental-instruments-by-nadine-schutz",
        "pk": 3321,
        "published": true,
        "publish_date": "2025-03-05T17:21:43+01:00"
    },
    {
        "title": "Reclaiming the Stage: Portable Immersive Audio for Live Performance by Rodrig De sa",
        "description": "360Prod builds autonomous, solar-powered 3D audio systems for immersive live shows. This talk explores how spatial sound becomes a performative tool, not just an effect. We’ll present L’Œil du Cyclone, a touring 360° concert with Zero Gr4vity, combining mobile setups, SPAT Revolution, and OSC-based control to shape sound in real time.",
        "content": "<p><strong>Abstract :</strong><br />360Prod develops autonomous, solar-powered 3D audio systems designed for touring artists and immersive stage productions. This presentation explores how spatial audio can move beyond fixed installations to become a live, expressive medium. Combining mobile octophonic setups, real-time control with SPAT Revolution, and custom OSC tools, our work enables performers to actively shape sound in space. We&rsquo;ll showcase L&rsquo;&OElig;il du Cyclone, a touring 360&deg; concert with Zero Gr4vity, where immersive sound becomes a true performative language &mdash; mobile, artistic, and audience-driven.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Bio :</strong></p>\r\n<p>Rodrig De Sa is a sound engineer and immersive audio designer based in France. He co-founded 360Prod, a company dedicated to spatial audio for live performance, bridging technical innovation with artistic creation. His work focuses on real-time 3D sound in concerts, theatre, and public installations, using custom-built tools and autonomous, eco-conscious systems.</p>\r\n<p><a href=\"https://360prod.fr\" title=\"Website of 360Prod\">360prod.fr</a></p>\r\n<p><a href=\"https://forum.ircam.fr/360prod.fr/on-tour\">Our Production ON TOUR</a></p>\r\n<p><a href=\"https://forum.ircam.fr/360prod.fr/soundarium\">Our LIVE set-up</a></p>\r\n<p><a href=\"https://forum.ircam.fr/360prod.fr/label360\">Our STUDIO set-up</a></p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0131c086ecf459e3d003f86672d6cae0.jpg\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0507b7446279e5c0407a1ae53d19bfb3.jpeg\" width=\"3843\" height=\"2882\" /></p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 621,
                "name": "3daudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 126031,
            "forum_user": {
                "id": 125865,
                "user": 126031,
                "first_name": "rodrig",
                "last_name": "360Prod",
                "avatar": "https://forum.ircam.fr/media/avatars/Headshot_Rodrig_DE_SA.JPG",
                "avatar_url": "/media/cache/75/53/75532cd254ed077df4a15a0cc30e1e66.jpg",
                "biography": "Rodrig is a musician, sound engineer, and co-founder of 360Prod, a French collective dedicated to making immersive 360° sound accessible to artists and audiences alike. From studio creation to live performance, he places artistic intention at the heart of spatial audio. \nThrough 360Prod, he designs and deploys mobile, autonomous, and eco-responsible systems that bring 3D sound experiences into new spaces : from theaters to outdoor venues ; transforming how sound is perceived and shared.\n\nLet's move sound for muisc from studio to Live !!",
                "date_modified": "2026-02-21T19:49:42.319769+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1169,
                        "forum_user": 125865,
                        "date_start": "2025-07-27",
                        "date_end": "2026-07-27",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "pfiouleson3d",
            "first_name": "rodrig",
            "last_name": "360Prod",
            "bookmarks": []
        },
        "slug": "reclaiming-the-stage-portable-immersive-audio-for-live-performance",
        "pk": 3655,
        "published": true,
        "publish_date": "2025-09-01T22:54:03+02:00"
    },
href=\"https://careers.coloradopublichealth.org/profiles/8106746-sunwin\"><span style=\"\">https://careers.coloradopublichealth.org/profiles/8106746-sunwin</span></a></p>\n<p><a href=\"https://www.elektroenergetika.si/UserProfile/tabid/43/userId/1447854/Default.aspx\"><span style=\"\">https://www.elektroenergetika.si/UserProfile/tabid/43/userId/1447854/Default.aspx</span></a></p>\n<p><a href=\"http://mura.hitobashira.org/index.php?sunwin20comim1\"><span style=\"\">http://mura.hitobashira.org/index.php?sunwin20comim1</span></a></p>\n<p><a href=\"https://www.bandsworksconcerts.info/index.php?sunwin20comim\"><span style=\"\">https://www.bandsworksconcerts.info/index.php?sunwin20comim</span></a></p>\n<p><a href=\"https://chanylib.ru/ru/forum/user/22852/\"><span style=\"\">https://chanylib.ru/ru/forum/user/22852/</span></a></p>\n<p><a href=\"https://casualgamerevolution.com/user/sunwin20comim\"><span style=\"\">https://casualgamerevolution.com/user/sunwin20comim</span></a></p>\n<p><a href=\"http://fort-raevskiy.ru/community/profile/sunwin20comim1/\"><span style=\"\">http://fort-raevskiy.ru/community/profile/sunwin20comim1/</span></a></p>\n<p><a href=\"https://skeptikon.fr/a/sunwin20comim/video-channels\"><span style=\"\">https://skeptikon.fr/a/sunwin20comim/video-channels</span></a></p>\n<p><a href=\"http://gojourney.xsrv.jp/index.php?sunwin20comim\"><span style=\"\">http://gojourney.xsrv.jp/index.php?sunwin20comim</span></a></p>\n<p><a href=\"https://www.sciencebee.com.bd/qna/user/sunwin20comim\"><span style=\"\">https://www.sciencebee.com.bd/qna/user/sunwin20comim</span></a></p>\n<p><a href=\"http://arahn.100webspace.net/profile.php?mode=viewprofile&amp;u=243374\"><span style=\"\">http://arahn.100webspace.net/profile.php?mode=viewprofile&amp;u=243374</span></a></p>\n<p><a href=\"https://www.designspiration.com/sunwin20comim1/\"><span style=\"\">https://www.designspiration.com/sunwin20comim1/</span></a></p>\n<p><a 
href=\"https://dinosquadsuriku.com/?sunwin20comim1\"><span style=\"\">https://dinosquadsuriku.com/?sunwin20comim1</span></a></p>\n<p><a href=\"https://potofu.me/sunwin20comim1\"><span style=\"\">https://potofu.me/sunwin20comim1</span></a></p>\n<p><a href=\"https://writexo.com/share/d6a2ced9f1ef\"><span style=\"\">https://writexo.com/share/d6a2ced9f1ef</span></a></p>\n<p><a href=\"https://socialgem.net/sunwin20comim1\"><span style=\"\">https://socialgem.net/sunwin20comim1</span></a></p>\n<p><a href=\"https://forum.delftship.net/Public/users/sunwin20comim1/\"><span style=\"\">https://forum.delftship.net/Public/users/sunwin20comim1/</span></a></p>\n<p><a href=\"https://files.fm/sunwin20comim1/info\"><span style=\"\">https://files.fm/sunwin20comim1/info</span></a></p>\n<p><a href=\"http://freestyler.ws/user/644772/sunwin20comim1\"><span style=\"\">http://freestyler.ws/user/644772/sunwin20comim1</span></a></p>\n<p><a href=\"http://hiphopinferno.com/user/sunwin20comim1\"><span style=\"\">http://hiphopinferno.com/user/sunwin20comim1</span></a></p>\n<p><a href=\"https://willysforsale.com/author/sunwin20comim1/\"><span style=\"\">https://willysforsale.com/author/sunwin20comim1/</span></a></p>\n<p><a href=\"https://zenwriting.net/i64a2v3a6r\"><span style=\"\">https://zenwriting.net/i64a2v3a6r</span></a></p>\n<p><a href=\"https://postheaven.net/sunwin20comim1/sunwin20comim1\"><span style=\"\">https://postheaven.net/sunwin20comim1/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.instapaper.com/p/sunwin20comim1\"><span style=\"\">https://www.instapaper.com/p/sunwin20comim1</span></a></p>\n<p><a href=\"https://writeablog.net/sunwin20comim1/sunwin20comim1\"><span style=\"\">https://writeablog.net/sunwin20comim1/sunwin20comim1</span></a></p>\n<p><a href=\"https://fora.babinet.cz/profile.php?section=personal&amp;id=120632\"><span style=\"\">https://fora.babinet.cz/profile.php?section=personal&amp;id=120632</span></a></p>\n<p><a 
href=\"http://dtan.thaiembassy.de/uncategorized/2562/?mingleforumaction=profile&amp;id=485323\"><span style=\"\">http://dtan.thaiembassy.de/uncategorized/2562/?mingleforumaction=profile&amp;id=485323</span></a></p>\n<p><a href=\"http://forum.cncprovn.com/members/421419-sunwin20comim1\"><span style=\"\">http://forum.cncprovn.com/members/421419-sunwin20comim1</span></a></p>\n<p><a href=\"https://www.xibeiwujin.com/home.php?mod=space&amp;uid=2310977&amp;do=profile&amp;from=space\"><span style=\"\">https://www.xibeiwujin.com/home.php?mod=space&amp;uid=2310977&amp;do=profile&amp;from=space</span></a></p>\n<p><a href=\"https://pantip.com/profile/9314724\"><span style=\"\">https://pantip.com/profile/9314724</span></a></p>\n<p><a href=\"http://iawbs.com/home.php?mod=space&amp;uid=951387\"><span style=\"\">http://iawbs.com/home.php?mod=space&amp;uid=951387</span></a></p>\n<p><a href=\"https://www.goldposter.com/members/sunwin20comim1/profile/\"><span style=\"\">https://www.goldposter.com/members/sunwin20comim1/profile/</span></a></p>\n<p><a href=\"https://www.hulkshare.com/sunwin20comim1\"><span style=\"\">https://www.hulkshare.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://zb3.org/sunwin20comim1/sunwin20comim1\"><span style=\"\">https://zb3.org/sunwin20comim1/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.party.biz/index.php/profile/379249?tab=541\"><span style=\"\">https://www.party.biz/index.php/profile/379249?tab=541</span></a></p>\n<p><a href=\"https://wirtube.de/a/sunwin20comim1/video-channels\"><span style=\"\">https://wirtube.de/a/sunwin20comim1/video-channels</span></a></p>\n<p><a href=\"https://propterest.com.au/user/78611/sunwin20comim1\"><span style=\"\">https://propterest.com.au/user/78611/sunwin20comim1</span></a></p>\n<p><a href=\"https://photouploads.com/sunwin20comim1\"><span style=\"\">https://photouploads.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.experts123.com/portal/u/sunwin20comim1\"><span 
style=\"\">https://www.experts123.com/portal/u/sunwin20comim1</span></a></p>\n<p><a href=\"https://all4.vip/p/page/view-persons-profile?id=120042\"><span style=\"\">https://all4.vip/p/page/view-persons-profile?id=120042</span></a></p>\n<p><a href=\"https://paste.lightcast.com/view/37bbf965\"><span style=\"\">https://paste.lightcast.com/view/37bbf965</span></a></p>\n<p><a href=\"https://its-my.link/@sunwin20comim1\"><span style=\"\">https://its-my.link/@sunwin20comim1</span></a></p>\n<p><a href=\"https://filesharingtalk.com/members/634888-sunwin20comim1\"><span style=\"\">https://filesharingtalk.com/members/634888-sunwin20comim1</span></a></p>\n<p><a href=\"https://sunwin20comim1.newgrounds.com/\"><span style=\"\">https://sunwin20comim1.newgrounds.com/</span></a></p>\n<p><a href=\"https://biiut.com/sunwin20comim1\"><span style=\"\">https://biiut.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.leenkup.com/sunwin20comim1\"><span style=\"\">https://www.leenkup.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://network.musicdiffusion.com/sunwin20comim1\"><span style=\"\">https://network.musicdiffusion.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://onespotsocial.com/sunwin20comim1\"><span style=\"\">https://onespotsocial.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://graph.org/sunwin20comim1-04-04\"><span style=\"\">https://graph.org/sunwin20comim1-04-04</span></a></p>\n<p><a href=\"https://challonge.com/sunwin20comim1\"><span style=\"\">https://challonge.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.fanfiction.net/~sunwin20comim1\"><span style=\"\">https://www.fanfiction.net/~sunwin20comim1</span></a></p>\n<p><a href=\"https://te.legra.ph/sunwin20comim1-04-04-2\"><span style=\"\">https://te.legra.ph/sunwin20comim1-04-04-2</span></a></p>\n<p><a href=\"https://youslade.com/sunwin20comim1\"><span style=\"\">https://youslade.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://devpost.com/sunwin20comim1\"><span 
style=\"\">https://devpost.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.99freelas.com.br/user/sunwin20comim1\"><span style=\"\">https://www.99freelas.com.br/user/sunwin20comim1</span></a></p>\n<p><a href=\"https://scam.vn/check-website/https://sunwin20.com.im/\"><span style=\"\">https://scam.vn/check-website/https://sunwin20.com.im/</span></a></p>\n<p><a href=\"https://www.automotiveforums.com/vbulletin/member.php?u=1100562\"><span style=\"\">https://www.automotiveforums.com/vbulletin/member.php?u=1100562</span></a></p>\n<p><a href=\"https://racetime.gg/team/sunwin20comim1\"><span style=\"\">https://racetime.gg/team/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.davidrio.com/profile/sunwin20comim1/profile\"><span style=\"\">https://www.davidrio.com/profile/sunwin20comim1/profile</span></a></p>\n<p><a href=\"https://infogram.com/sunwin20comim1-1h9j6q7o5dpe54g\"><span style=\"\">https://infogram.com/sunwin20comim1-1h9j6q7o5dpe54g</span></a></p>\n<p><a href=\"https://forums.sonicretro.org/members/sunwin20comim1.71873/\"><span style=\"\">https://forums.sonicretro.org/members/sunwin20comim1.71873/</span></a></p>\n<p><a href=\"https://beatsaver.com/playlists/1169475\"><span style=\"\">https://beatsaver.com/playlists/1169475</span></a></p>\n<p><a href=\"https://shhhnewcastleswingers.club/forums/users/sunwin20comim1/\"><span style=\"\">https://shhhnewcastleswingers.club/forums/users/sunwin20comim1/</span></a></p>\n<p><a href=\"https://sunwin20comim1.blogocial.com/sunwin-76677865\"><span style=\"\">https://sunwin20comim1.blogocial.com/sunwin-76677865</span></a></p>\n<p><a href=\"https://sunwin20comim1.thezenweb.com/sunwin-79335648\"><span style=\"\">https://sunwin20comim1.thezenweb.com/sunwin-79335648</span></a></p>\n<p><a href=\"https://sunwin20comim1.pages10.com/sunwin-76082943\"><span style=\"\">https://sunwin20comim1.pages10.com/sunwin-76082943</span></a></p>\n<p><a href=\"https://sunwin20comim1.luwebs.com/41542873/sunwin\"><span 
style=\"\">https://sunwin20comim1.luwebs.com/41542873/sunwin</span></a></p>\n<p><a href=\"https://sunwin20comim1.webbuzzfeed.com/41019383/sunwin\"><span style=\"\">https://sunwin20comim1.webbuzzfeed.com/41019383/sunwin</span></a></p>\n<p><a href=\"https://amvnews.ru/forum/profile.php?mode=viewprofile&amp;u=103733\"><span style=\"\">https://amvnews.ru/forum/profile.php?mode=viewprofile&amp;u=103733</span></a></p>\n<p><a href=\"https://sunwin20comim1.gumroad.com/\"><span style=\"\">https://sunwin20comim1.gumroad.com/</span></a></p>\n<p><a href=\"https://fragbite.se/user/344013/sunwin20comim1\"><span style=\"\">https://fragbite.se/user/344013/sunwin20comim1</span></a></p>\n<p><a href=\"https://failiem.lv/sunwin20comim1/info\"><span style=\"\">https://failiem.lv/sunwin20comim1/info</span></a></p>\n<p><a href=\"https://blueprintue.com/profile/sunwin20comim1/\"><span style=\"\">https://blueprintue.com/profile/sunwin20comim1/</span></a></p>\n<p><a href=\"https://tuscl.net/member/882247\"><span style=\"\">https://tuscl.net/member/882247</span></a></p>\n<p><a href=\"https://imgcredit.xyz/sunwin20comim1\"><span style=\"\">https://imgcredit.xyz/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.mshowto.org/forum/members/sunwin20comim1.html\"><span style=\"\">https://www.mshowto.org/forum/members/sunwin20comim1.html</span></a></p>\n<p><a href=\"https://www.lola.vn/u/sunwinocomim\"><span style=\"\">https://www.lola.vn/u/sunwinocomim</span></a></p>\n<p><a href=\"https://boinc.berkeley.edu/central/show_user.php?userid=23752\"><span style=\"\">https://boinc.berkeley.edu/central/show_user.php?userid=23752</span></a></p>\n<p><a href=\"https://www.iniuria.us/forum/member.php?669468-sunwin20comim1\"><span style=\"\">https://www.iniuria.us/forum/member.php?669468-sunwin20comim1</span></a></p>\n<p><a href=\"https://mforum.cari.com.my/home.php?mod=space&amp;uid=3393700&amp;do=profile\"><span 
style=\"\">https://mforum.cari.com.my/home.php?mod=space&amp;uid=3393700&amp;do=profile</span></a></p>\n<p><a href=\"https://wefunder.com/sunwin20comim1\"><span style=\"\">https://wefunder.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://golosknig.com/profile/sunwin20comim1/\"><span style=\"\">https://golosknig.com/profile/sunwin20comim1/</span></a></p>\n<p><a href=\"https://doodleordie.com/profile/sunwin20comim1\"><span style=\"\">https://doodleordie.com/profile/sunwin20comim1</span></a></p>\n<p><a href=\"https://roomstyler.com/users/sunwin20comim1\"><span style=\"\">https://roomstyler.com/users/sunwin20comim1</span></a></p>\n<p><a href=\"https://securityheaders.com/?q=https%3A%2F%2Fsunwin20.com.im%2F\"><span style=\"\">https://securityheaders.com/?q=https%3A%2F%2Fsunwin20.com.im%2F</span></a></p>\n<p><a href=\"https://jobs.lajobsportal.org/profiles/8107045-sunwin\"><span style=\"\">https://jobs.lajobsportal.org/profiles/8107045-sunwin</span></a></p>\n<p><a href=\"https://replit.com/@sunwin20comim1\"><span style=\"\">https://replit.com/@sunwin20comim1</span></a></p>\n<p><a href=\"https://doselect.com/@462c3376553e536007ca73b39\"><span style=\"\">https://doselect.com/@462c3376553e536007ca73b39</span></a></p>\n<p><a href=\"https://secondstreet.ru/profile/sunwin20comim1/\"><span style=\"\">https://secondstreet.ru/profile/sunwin20comim1/</span></a></p>\n<p><a href=\"https://nhattao.com/members/user6945412.6945412/\"><span style=\"\">https://nhattao.com/members/user6945412.6945412/</span></a></p>\n<p><a href=\"https://community.m5stack.com/user/sunwin20comim1\"><span style=\"\">https://community.m5stack.com/user/sunwin20comim1</span></a></p>\n<p><a href=\"https://demo.wowonder.com/sunwin20comim1\"><span style=\"\">https://demo.wowonder.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://volleypedia.org/index.php?qa=user&amp;qa_1=sunwin20comim1\"><span style=\"\">https://volleypedia.org/index.php?qa=user&amp;qa_1=sunwin20comim1</span></a></p>\n<p><a 
href=\"https://findaspring.org/members/sunwin20comim1/\"><span style=\"\">https://findaspring.org/members/sunwin20comim1/</span></a></p>\n<p><a href=\"https://formulamasa.com/elearning/members/sunwin20comim1/?v=96b62e1dce57\"><span style=\"\">https://formulamasa.com/elearning/members/sunwin20comim1/?v=96b62e1dce57</span></a></p>\n<p><a href=\"https://qiita.com/sunwin20comim1\"><span style=\"\">https://qiita.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.vid419.com/home.php?mod=space&amp;uid=3483068\"><span style=\"\">https://www.vid419.com/home.php?mod=space&amp;uid=3483068</span></a></p>\n<p><a href=\"https://www.play56.net/home.php?mod=space&amp;uid=6094825\"><span style=\"\">https://www.play56.net/home.php?mod=space&amp;uid=6094825</span></a></p>\n<p><a href=\"https://lamsn.com/home.php?mod=space&amp;uid=1927929\"><span style=\"\">https://lamsn.com/home.php?mod=space&amp;uid=1927929</span></a></p>\n<p><a href=\"https://protocol.ooo/ja/users/sunwin20comim1\"><span style=\"\">https://protocol.ooo/ja/users/sunwin20comim1</span></a></p>\n<p><a href=\"https://truckymods.io/user/479330\"><span style=\"\">https://truckymods.io/user/479330</span></a></p>\n<p><a href=\"https://mygamedb.com/profile/sunwin20comim1\"><span style=\"\">https://mygamedb.com/profile/sunwin20comim1</span></a></p>\n<p><a href=\"https://www.claimajob.com/profiles/8107183-sunwin\"><span style=\"\">https://www.claimajob.com/profiles/8107183-sunwin</span></a></p>\n<p><a href=\"https://www.facekindle.com/sunwin20comim1\"><span style=\"\">https://www.facekindle.com/sunwin20comim1</span></a></p>\n<p><a href=\"https://noti.st/sunwin20comim1\"><span style=\"\">https://noti.st/sunwin20comim1</span></a></p>\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 166623,
            "forum_user": {
                "id": 166386,
                "user": 166623,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/462c3376553e536007ca73b39853442d?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-04T19:06:59.116523+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sunwin20comim1",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sunwin-cong-game-bai-oi-thuong-link-tai-app-chinh-thuc",
        "pk": 4593,
        "published": false,
        "publish_date": "2026-04-04T19:14:27.941846+02:00"
    },
    {
        "title": "Collective Individuation in Virtual Spaces: AI-driven co-creativity and Telematics for Inclusive Music-Making by Hans Kretz",
        "description": "This practice-based research seeks to question how the meaning of the prefix \"co\" in \"co-creativity\" can be fully understood. This question is central to artistic and educational communities employing AI in co-creative practices, and will be addressed in this research project, which employs AI and remote collaboration technologies to enhance accessibility in collective music- making, through a methodology integrating principles of co-design, universal design and collaborative co- creation.",
        "content": "<p><span>By applying philosophical and aesthetic frameworks, it investigates how the implementation of distributive, inclusive and de-hierarchizing technologies can challenge Western traditions of authorship and expand our understanding of collaborative music-making and knowledge production. &nbsp;I will draw on the use of software such as Somax2 and Dicy2 in ensemble practice to highlight how a 'sound object' becomes an 'object of thought,' inherently involving knowledge production about it.&nbsp;</span></p>",
        "topics": [
            {
                "id": 2353,
                "name": "co-creativity/ generative AI/ accessibility/ telematics",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 24769,
            "forum_user": {
                "id": 24742,
                "user": 24769,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/kretz_photo_florence.jpeg",
                "avatar_url": "/media/cache/32/9c/329c88320f1a66b9f139ae07d4bbffc7.jpg",
                "biography": "Hans Kretz is a conductor, pianist, researcher and author. He holds PhDs in Music and Philosophy from the University of Leeds and the University of Paris 8 Vincennes-Saint-Denis respectively. His research interests include philosophy of culture, aesthetics, philosophical anthropology and philosophy of technology. His writings have appeared in the Recherches d'Esthétique Transculturelle series of L'Harmattan, and in the Cahiers Critiques de Philosophie. He is a Lecturer at Stanford University, where he currently conducts and directs the Stanford New Ensemble.",
                "date_modified": "2025-12-28T14:44:33.622746+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 979,
                        "forum_user": 24742,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "hkretz",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "collective-individuation-in-virtual-spaces-ai-driven-co-creativity-and-telematics-for-inclusive-music-making",
        "pk": 3081,
        "published": true,
        "publish_date": "2024-10-28T03:24:11+01:00"
    },
    {
        "title": "DE-EXTINCTION DREAM PALACE - Amy CUTLER",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>\"DE-EXTINCTION DREAM PALACE will be a prototype from my residency on species revival as augmented reality experience, supported by the Leverhulme Centre for Anthropocene Biodiversity (LCAB). Using hypothetical archival sounds &ndash; such as the recent re-creation of the theoretical bioacoustics of Prophalangopsis obscura, an extinct insect species alive at the time of the dinosaurs &ndash; the project draws on &lsquo;virtuality&rsquo; in both its sonic and visual materials. Spatially, the work is designed as a configuration of two screens (left and right) and two speakers (left and right) facing each other. Drawing on David Jaclin&rsquo;s concept of de-extinction as both &lsquo;dreaming&rsquo; and &lsquo;cinematic chimera&rsquo; &ndash; and on cinema&rsquo;s own cultural spatial history as &lsquo;dream palace&rsquo; &ndash; I will explore ways of designing an experience related to the virtual past (left) and the virtual future (right).&nbsp; The ordinary ways of designing and experiencing cinema &ndash; both sonic and visual &ndash; struggle to express the simultaneous disappearances-and-future-propositions of a being undergoing de-extinction, focusing instead on the centred experience of &lsquo;now&rsquo;. But between the fossil bone, the coded information, and the resurrected individual &ndash; where does the animal stand in this configuration? Drawing on both the spatial metaphor of left-to-right timelines, and disorienting audio-visual techniques linked to cues of flashback and flashforward, this work aims to spatialise the simultaneous loss-and-revival of virtual lives.</p>\r\n<p>Rather than creating the full &lsquo;multiplex&rsquo; installation, this can be delivered as a talk with a brief demonstration as part of the presentation.&nbsp;\"</p>",
        "topics": [],
        "user": {
            "pk": 27329,
            "forum_user": {
                "id": 27301,
                "user": 27329,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/b3bd8482920a84260923a5c9850f4e3c?s=120&d=retro",
                "biography": "Dr. Amy Cutler is an audio-visual artist and experimental designer who works with ideas of species, spaces, senses, atmosphere, and audiences. In 2022 she was awarded the Daphne Oram award for women and minority gender people who are innovating in music and sound. Drawing on a training in geography, she works frequently on the production of immersive, augmented, interactive, and live cinema and AV installation events, provoking and changing the public conversation around ideas of space, geography, and nature-cultures.  Recent projects include her outdoor solar cinema created in the Vall de Gallinera for the international Enclave Land Art Residency (2022), her joint audio-visual residency with Ella Finer at KELDER gallery, Experiments in Company: Outside the Safe Operating Space of a New Planetary Boundary (2022), her live 360° cinema show co-composed with atmospheric and audience sensors for Shangri-La at Glastonbury Festival (2022), and her collaboration with Experimenting, Experiencing, Reflecting (EER), led by artist Olafur Eliasson and scientist Andreas Roepstorff.",
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "amycutler",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "de-extinction-dream-palace",
        "pk": 2083,
        "published": true,
        "publish_date": "2023-02-24T17:23:51+01:00"
    },
    {
        "title": "OpenMusic NEWS - Karim Haddad, Carlos Agon",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p><span>&Eacute;quipe RepMus - (F)O(R)M</span><br /><br /><span>Pr&eacute;sentation des nouveaut&eacute;s concernant l'environnement OpenMusic 7.2 et notamment l'int&eacute;gration de fluidsynth. Il sera aussi pr&eacute;sent&eacute; les prosp&eacute;ctives de d&eacute;veloppement futur de OM. Nous presentons enfin le projet exp&eacute;rimental d'Openmusic sur python. </span></p>",
        "topics": [],
        "user": {
            "pk": 14,
            "forum_user": {
                "id": 14,
                "user": 14,
                "first_name": "Karim",
                "last_name": "Haddad",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1f556229c0742ef0586dd43d312f81a4?s=120&d=retro",
                "biography": "Karim Haddad was born in 1962 in Beirut Lebanon. He studied at the national conservatory there until it closed its doors in 1975 due to the civil war. He then went on to study philosophy and literature. Haddad received six awards from the CNSMD de Paris in addition to the Diplôme Supérieur de Composition with honors. He has worked with composers such as A.Bancquart, P. Mefano, K. Huber, and Emmanuel Nunes. This learning period is marked by his keen interest for non-tempered spaces and their strong relationship with temporal poetry. In 1992 and 1994 he took part in Ferienkursen für Musik in Darmstadt where he received a scholarship. In 1995, he took a class in computer music at IRCAM, and from that point on, the computer became the only tool he used for the elaboration of his works.\r\n\r\nAs a computer music expert, and more particularly as an expert in computer-assisted composition, in 2000 he is given the responsibility of technical support for the IRCAM Forum. He has developed several tools for the OpenMusic environment (synthesis control via Csound), as well as interfaces between this environment and score editors such as Finale and Lilypond.",
                "date_modified": "2026-02-18T11:08:17.096351+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 3,
                        "forum_user": 14,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 0,
                        "keys": [
                            {
                                "id": 544,
                                "membership": 3
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "haddad",
            "first_name": "Karim",
            "last_name": "Haddad",
            "bookmarks": []
        },
        "slug": "openmusic-news-karim-haddad-carlos-agon",
        "pk": 2135,
        "published": true,
        "publish_date": "2023-03-14T14:20:50+01:00"
    },
    {
        "title": "An Adaptive Acoustic Software for Instrumental Music which can be tangibly used for Music Hardware, Products & Accessories by Arnab Dalal",
        "description": "This Projects presents an Adaptive Psychoacoustic Model designed to process and tune audio data for high-fidelity instrumental music, which contains no lyrical attributes. The approach includes: 1. Audio Extraction 2. EQ Techniques and Psychoacoustic Models 3. Adaptive Audio Codec with AI Integration",
        "content": "<h2><strong>An Adaptive Acoustic Software for Instrumental Music which can be tangibly used for Music Hardware, Products &amp; Accessories </strong></h2>\r\n<h2><strong>- Arnab Dalal</strong></h2>\r\n<p><strong>Introduction:</strong></p>\r\n<p>It's no wonder that today, nearly <strong>60-70%</strong> of the music humanity has ever created and experienced falls under the instrumental category. Despite this, there is <strong>no dedicated acoustic software</strong> or codec designed to enhance the <strong>instrumental listening experience.&nbsp;</strong></p>\r\n<p>As music evolves, we're seeing a shift towards a deeper appreciation of Instrumental Sound. Whether it's the soothing rhythms of wellness music or the pulsating beats of hard techno, there's no doubt that instrumental music plays a pivotal role. It allows us to experience the raw essence of sound--uninterrupted by lyrics. Lyrics can sometimes impose the songwriter's emotions and narrative onto the listener. Instrumental music, on the other hand, gives space for personal interpretation, inviting the listener to connect with music in its purest form.&nbsp;</p>\r\n<p>Studies show that listening to instrumental music can enhance cognitive function, creativity, and focus. There is also evidence that professional pianists are much better than non-musicians at discriminating two closely separated points, perhaps from years of sight reading. They also improved faster with practice, suggesting that <strong>music makes brains more plastic</strong> in general. 
<strong>Learn an instrument, then, and it might get easier to learn everything else</strong>.&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4fecd70a4a8b952d8954022c1aaea514.jpeg\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/33e3cd71652c8170a039eedac921edec.png\" /></p>\r\n<p><strong>That's what we're here to change: Our Research and Approach:</strong></p>\r\n<p>At present, we're developing a codec that optimises the<strong> 2-4 kHz range</strong>--the sweet spot of the human voice frequency spectrum--but reimagined for instrumental music. Our goal is to enrich this range, giving listeners a more immersive and refined auditory experience.&nbsp;We've mapped out the key frequency behaviours and analysed how timbre and harmonics contribute to Instrumental Sound. Here's an overview of our process:&nbsp;</p>\r\n<p><strong>1. Step One: Signal Analysis</strong></p>\r\n<p>We start by analysing the audio data. This allows us to tailor the listening experience to optimise the specific characteristics of the music. (References Below)</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/cfefac63a1afa67c80b08be1072d8bc7.png\" /></p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3834dac5344ad7e086a9fc1406c4175a.png\" /></p>\r\n<p><strong>2. Step Two: Proprietary Processing Algorithms</strong></p>\r\n<p>Using our provisionally patented algorithms, we apply cutting-edge processing techniques to optimise the voice frequency range and elevate the listener's experience of instrumental sound.&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5afc05042f2fa2d3a2d76db52253a722.png\" /></p>\r\n<p><strong>3. Step Three: AI Integration</strong></p>\r\n<p>Finally, we incorporate AI to refine the sound. 
Since not all music is the same, this step allows us to fine-tune the audio data and make adjustments to each individual track.&nbsp;</p>\r\n<p><strong>Conclusion:</strong></p>\r\n<p>Our approach places special emphasis on music creators, including Artists, Collaborators, Sound Designers and Record Labels working with genres like Ambient, Classical, Orchestral Music, Film Scores, and a vast array of Experimental Music.&nbsp;</p>\r\n<p>Let's experience together how different genres--whether it's ambient, electroacoustic or even DIY instruments--respond to these new enhancements. Let's make this a conversation, not just an article or presentation!</p>",
        "topics": [
            {
                "id": 458,
                "name": "Ambient",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2323,
                "name": "Audio Codec",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2324,
                "name": "Audio Extraction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2321,
                "name": "digital signal processing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2322,
                "name": "Psychoacoustic Model",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18336,
            "forum_user": {
                "id": 18329,
                "user": 18336,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-10-10_at_16.42.11.jpeg",
                "avatar_url": "/media/cache/46/33/463303acd1b38c3ab3ac694a940d675c.jpg",
                "biography": "I'm the Founder/Director of RESET NETWORKS (OPC) PRIVATE LIMITED. We're an experimental, culture-driven brand whose goal is to constantly drive innovation and inspire, helping to lead and define the progression of electronic music culture. As a startup recognised & certified under the #startupindia scheme, RESET is an early-stage platform for new developments in Music Technology.",
                "date_modified": "2026-01-03T14:56:10.899336+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 978,
                        "forum_user": 18329,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "dalalarnab93",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "an-adaptive-acoustic-software-for-instrumental-music-which-can-be-tangibly-used-for-music-hardware-products-accessories",
        "pk": 3046,
        "published": true,
        "publish_date": "2024-10-21T19:56:22+02:00"
    },
    {
        "title": "To The Lighthouse Program Note",
        "description": "Program Note",
        "content": "<p>&ldquo;She felt... how life, from being made up of little separate incidents which one lived one by one, became curled and whole like a wave which bore one up with it and threw one down with it, there, with a dash on the beach.&rdquo;</p>\r\n<p>- Virginia Woolf, <em>To The Lighthouse</em></p>",
        "topics": [],
        "user": {
            "pk": 31261,
            "forum_user": {
                "id": 31214,
                "user": 31261,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/pro_portrait_bw.jpg",
                "avatar_url": "/media/cache/50/59/50593af32e6f8eaa2e53ad25147682ca.jpg",
                "biography": "Anuj Bhutani is a quickly emerging composer. Since 2020, he’s won an ASCAP Morton Gould Young Composer Award, 1st prize in Cerddorion Vocal Ensemble’s Emerging Composer Competition, Verdigris Ensemble’s ION Composer Competition, 3rd prize in the American Prize in Choral Composition and The Choral Project's Composition Competition, and was a Finalist in the RED NOTE Composition Competition. His work has been selected for the NewAm Composer’s Lab, Norfolk Chamber Music Festival,  RED NOTE Festival Composition Workshop (2021), among others. His music has been commissioned or performed by Ashley Bathgate, Raleigh Civic Symphony, Metropolis Ensemble, Verdant Vibes, Andrew Tholl of Wild Up, Lauren Cauley Kalal of Switch~ ensemble, the WPU Percussion Ensemble, and more. \nHe currently attends University of Southern California (MM Composition) and previously attended University of North Texas (BM Composition). His primary teachers have included Joseph Klein, Andrew May, Sungji Hong, Drew Schnurr, and Bruce Broughton.",
                "date_modified": "2022-08-26T17:13:50+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "anujbhutani",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "to-the-lighthouse-program-note",
        "pk": 1326,
        "published": true,
        "publish_date": "2022-09-12T19:22:03+02:00"
    },
    {
        "title": "ECOSONICO: Augmenting Sound and Defining Soundscapes in a Local Interactive Space.",
        "description": "In this paper we present the design of an augmented reality system developed for the sound-art installation ECOSONICO, the sound of biodiversity in Mexico (EcoSónico: La biodiversidad sonora en México), a project sponsored by the National Phonoteca of Mexico and the Department of Environment and Natural Resources of Mexico (SEMARNAT).\nThe purpose of the installation was to create an individual sound experience for each user in a space shared with other participants, through the selection of several surrounding soundscapes contained in a mobile device (iPod Touch) that allows the user to navigate an augmented sound reality. This was achieved by detecting the user's position and orientation with the fiducial tracking system reacTIVision, which reports the position to the mobile device associated with the fiducial tracking, where a binaural spatialization algorithm for virtual sound objects lets the user identify and navigate the soundscape.\nFinally, we present future ideas for interacting with the system online to define the soundscapes, as well as ideas for enriching the binaural algorithm through time-domain digital signal processing.",
        "content": "<p>https://www.researchgate.net/publication/357605076_ECOSONICO_Augmenting_Sound_and_defining_Soundscapes_in_a_local_Interactive_Space</p>",
        "topics": [
            {
                "id": 1194,
                "name": "augmented reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 158,
                "name": "Network",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1117,
                "name": "virtualisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4339,
            "forum_user": {
                "id": 4337,
                "user": 4339,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/b50cbca6d15e6072f9f80d85b809d33d?s=120&d=retro",
                "biography": "My name is José Manuel Mondragón Cruz, and I am a specialist in music and interactive technologies with a solid track record in the creation, development, and teaching of sound and multimedia production systems. My experience spans more than two decades of research, production, and training in music, sound design, and applied technology.\n\nFrom the beginning, my passion for music and technology led me to train as a Licentiate in Music Composition at the Escuela Nacional de Música of UNAM, and later to earn a Master's in Music Technology, with a focus on augmented musical collaboration systems. My academic work has always been guided by the convergence of sound art, collaborative creativity, and technological development, areas in which I have carried out research on augmented reality, human-machine interaction, and interactive systems for music creation.",
                "date_modified": "2025-08-23T01:07:05.445076+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "pepomondragon",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ecosonico-augmenting-sound-and-defining-soundscapes-in-a-local-interactive-space",
        "pk": 3624,
        "published": true,
        "publish_date": "2025-08-18T18:57:27.127459+02:00"
    },
    {
        "title": "RAVE Model Challenge",
        "description": "L'objectif de ce challenge est de soutenir les auteurs des meilleurs modèles et d'établir collectivement un répertoire de modèles RAVE, permettant à chacun de bénéficier de la richesse et de la variété des approches dans le domaine du transfert de timbre/musique.",
        "content": "<p><img src=\"/media/uploads/images/rave_model_challenge_v5.jpeg\" alt=\"\" width=\"301\" height=\"301\" /></p>\r\n<h1><b>DESCRIPTION:</b></h1>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/rave/\"><span style=\"font-weight: 400;\">RAVE (autoEncodeur variationnel audio en temps r&eacute;el)</span></a><span style=\"font-weight: 400;\"> est un algorithme con&ccedil;u pour la synth&egrave;se de formes d'onde audio de haute qualit&eacute; en temps r&eacute;el &agrave; l'aide de r&eacute;seaux neuronaux. Il exploite une architecture d'auto-encodeur variationnel (VAE), qui compresse les donn&eacute;es audio en une repr&eacute;sentation latente compacte, permettant une reconstruction efficace des signaux audio.&nbsp;</span></p>\r\n<p><span style=\"font-weight: 400;\">Les principales fonctionnalit&eacute;s de RAVE incluent&nbsp;:&nbsp;</span></p>\r\n<ul>\r\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">G&eacute;n&eacute;ration audio rapide et de haute qualit&eacute;&nbsp;: il excelle dans la production d'un son pr&eacute;cis en temps r&eacute;el, ce qui le rend id&eacute;al pour les applications interactives (20x en temps r&eacute;el &agrave; une fr&eacute;quence d'&eacute;chantillonnage de 48 kHz sur un processeur standard)</span></li>\r\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Utilisation en temps r&eacute;el : Int&eacute;gr&eacute; &agrave; des outils comme Max et Pure Data (Pd), RAVE peut &ecirc;tre utilis&eacute; avec le d&eacute;codeur nn~ pour la g&eacute;n&eacute;ration et la transformation du son en temps r&eacute;el. 
Un </span><a href=\"https://forum.ircam.fr/projects/detail/rave-vst/\"><span style=\"font-weight: 400;\">Plugin VST</span></a><span style=\"font-weight: 400;\"> le rend facile &agrave; utiliser dans n&rsquo;importe quelle DAW.</span></li>\r\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Applications&nbsp;: les utilisations courantes incluent la synth&egrave;se audio, la transformation du timbre et le transfert de style.</span></li>\r\n</ul>\r\n<p><span style=\"font-weight: 400;\">En bref, RAVE est un outil puissant de g&eacute;n&eacute;ration audio en temps r&eacute;el, offrant &agrave; la fois vitesse et qualit&eacute;.</span></p>\r\n<p><span style=\"font-weight: 400;\">En seulement quelques mois, </span><a href=\"https://forum.ircam.fr/collections/detail/rave/\"><span style=\"font-weight: 400;\">RAVE</span></a><span style=\"font-weight: 400;\"> a popularis&eacute; la cr&eacute;ation de mod&egrave;les &agrave; partir d'enregistrements audio, gr&acirc;ce notamment &agrave; la publication d&rsquo;</span><a href=\"https://forum.ircam.fr/article/detail/tutoriel-rave-and-nn/\"><span style=\"font-weight: 400;\">une s&eacute;rie de tutoriels</span></a><span style=\"font-weight: 400;\"> et du </span><a href=\"https://github.com/acids-ircam/RAVE\"><span style=\"font-weight: 400;\">code source ouvert</span></a><span style=\"font-weight: 400;\">. Une </span><a href=\"https://discord.gg/ygSqsj5pVH\"><span style=\"font-weight: 400;\">communaut&eacute; bouillonnante</span></a><span style=\"font-weight: 400;\"> des utilisateurs s'est empar&eacute;e de l&rsquo;algorithme, et </span><a href=\"https://acids-ircam.github.io/rave_models_download\"><span style=\"font-weight: 400;\">de nombreux mod&egrave;les ont &eacute;merg&eacute;</span></a><span style=\"font-weight: 400;\">. 
M&ecirc;me si ces mod&egrave;les peuvent &ecirc;tre assez co&ucirc;teux &agrave; produire (une vingtaine d&rsquo;heures GPU), tr&egrave;s peu ont &eacute;t&eacute; publi&eacute;s jusqu&rsquo;&agrave; pr&eacute;sent, souvent en raison de probl&egrave;mes de droits d&rsquo;auteur. Ce d&eacute;fi concerne des mod&egrave;les entra&icirc;n&eacute;s sur des enregistrements personnels dont les auteurs poss&egrave;dent tous les droits.</span></p>\r\n<p><span style=\"font-weight: 400;\">L'objectif de ce challenge est d'accompagner les auteurs des meilleurs mod&egrave;les et de constituer collectivement un r&eacute;pertoire de mod&egrave;les </span><a href=\"https://forum.ircam.fr/collections/detail/rave/\"><span style=\"font-weight: 400;\">RAVE</span></a><span style=\"font-weight: 400;\">, permettant &agrave; chacun de b&eacute;n&eacute;ficier de la richesse et de la vari&eacute;t&eacute; des approches dans le domaine du transfert de timbre/musique.&nbsp;</span></p>\r\n<p><span style=\"font-weight: 400;\">Le d&eacute;fi est organis&eacute; par la plateforme </span><a href=\"https://dafneplus.eng.it/\" target=\"_blank\">DAFNE+</a><span style=\"font-weight: 400;\">, qui promeut le partage de contenus par l&rsquo;utilisation des NFTs.&nbsp;</span></p>\r\n<p><span style=\"font-weight: 400;\">Un vote du public attribue trois prix aux participants.&nbsp;</span></p>\r\n<h1><b>PRIX:</b></h1>\r\n<p><span style=\"font-weight: 400;\">La remise des prix aura lieu lors </span><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><span style=\"font-weight: 400;\">des ateliers du Forum Ircam 2025</span></a><span style=\"font-weight: 400;\">, entre le 26 et le 28 mars 2025 &agrave; l'IRCAM, Paris.</span></p>\r\n<ul>\r\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">1&egrave;re r&eacute;compense : 2000&euro; plus un an d'adh&eacute;sion Premium au Forum Ircam</span></li>\r\n<li style=\"font-weight: 400;\"><span 
style=\"font-weight: 400;\">2&egrave;me r&eacute;compense : 1000&euro; plus un an d'adh&eacute;sion Premium au Forum Ircam</span></li>\r\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">3&egrave;me r&eacute;compense : 500&euro; plus un an d'adh&eacute;sion Premium au Forum Ircam</span></li>\r\n</ul>\r\n<p><span>Si plusieurs candidatures ont le m&ecirc;me nombre de votes gagnants, les montants de leur prix et des prix suivants seront partag&eacute;s entre eux.&nbsp;</span>Par exemple :&nbsp;</p>\r\n<div>\r\n<ul>\r\n<li>si deux candidats ont le plus grand score ex-aequo et un troisi&egrave;me le score suivant, les deux premiers se partageront (2000+1000)/2 = 1500&euro; et le troisi&egrave;me aura le 3&egrave;me prix donc 500&euro;</li>\r\n<li>si un candidat a le plus grand nombre de votes (1er prix de 2000&euro;) et 3 candidats se partagent le second score de votes, leur prix &agrave; chacun sera (1000+500)/3 = 500&euro;</li>\r\n</ul>\r\n</div>\r\n<h1><b>DATES IMPORTANTES&nbsp;:</b></h1>\r\n<ul>\r\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Publication de l'appel en novembre 2024 sur forum.ircam.fr et sur <a href=\"https://dafneplus.eu/\">dafneplus.eu</a></span></li>\r\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Plateforme DAFNE+ de l&rsquo;appel, ouverte du 1er d&eacute;cembre 2025 (midi CET) au <span style=\"text-decoration: line-through;\">31 janvier 2025 (midi CET)</span>&nbsp;10 F&eacute;vrier 2025 (midi CET) -<em> Extension de la date limite</em></span></li>\r\n<li style=\"font-weight: 400;\"><a href=\"https://forum.ircam.fr/article/detail/rave-model-challenge-vote/\">Vote du public du 11 f&eacute;vrier 2025 (midi CET) au 28 f&eacute;vrier 2025 (midi CET).&nbsp;</a></li>\r\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Remise des prix en mars 2025 lors des ateliers du Forum Ircam 2025.</span></li>\r\n</ul>\r\n<h1><b>SOUMISSION:</b></h1>\r\n<p><span style=\"font-weight: 
400;\">Pour participer, les participants doivent uploader leur candidature via le gestionnaire de contenu de la plateforme&nbsp;<a href=\"https://dafneplus.eng.it/\" target=\"_blank\">DAFNE+</a></span><span style=\"font-weight: 400;\">, avec le contenu suivant dans un seul fichier zip, avec le type \"AI model\" :</span></p>\r\n<ul>\r\n<li><span style=\"font-weight: 400;\">Le mod&egrave;le au format .ts. Mode &laquo; forward &raquo; uniquement.</span>\r\n<ul>\r\n<li><span style=\"font-weight: 400;\">Description du mod&egrave;le&nbsp;: une description du mod&egrave;le en termes de&nbsp;</span></li>\r\n<li><span style=\"font-weight: 400;\">Types de sons utilis&eacute;s (description, instruments, genre, playlist...)</span></li>\r\n<li><span style=\"font-weight: 400;\">Dur&eacute;e totale du corpus audio utilis&eacute; pour l'entra&icirc;nement.</span></li>\r\n<li><span style=\"font-weight: 400;\">Intention artistique : avez vous une intention artistique sp&eacute;ciale avec ce mod&egrave;le?</span></li>\r\n<li><span style=\"font-weight: 400;\">Une image illustrative pr&eacute;sentant le mod&egrave;le.</span></li>\r\n</ul>\r\n</li>\r\n<li><span style=\"font-weight: 400;\">Des informations compl&eacute;mentaires optionnelles.</span></li>\r\n<li>Exemples de sorties du mod&egrave;le:&nbsp;un ensemble de fichiers audio de sortie montrant l'effet du mod&egrave;le&nbsp;:\r\n<ul>\r\n<li><span style=\"font-weight: 400;\">5 g&eacute;n&eacute;rations libres de 15sec, en mode &ldquo;MSprior&rdquo; ou &ldquo;decoder&rdquo;</span></li>\r\n<li><span style=\"font-weight: 400;\">5 transformations en mode &ldquo;forward&rdquo; de 5 sons impos&eacute;s, t&eacute;l&eacute;chargeables via les liens suivants :</span>\r\n<ul>\r\n<li><span style=\"font-weight: 400;\">chantant twinkle twinkle, Mr. 
moon.wav par bectec -- <a href=\"https://freesound.org/s/665123/\" target=\"_blank\">https://freesound.org/s/665123/</a> -- Licence : Creative Commons 0</span></li>\r\n<li><span style=\"font-weight: 400;\">106 BPM Drum Loop 1.wav par esares -- <a href=\"https://freesound.org/s/431874/\" target=\"_blank\">https://freesound.org/s/431874/</a> -- Licence : Creative Commons 0</span></li>\r\n<li><span style=\"font-weight: 400;\">entrelac&eacute; 0T_50mm par Setuniman -- <a href=\"https://freesound.org/s/165172/\" target=\"_blank\">https://freesound.org/s/165172/</a> -- Licence : Attribution NonCommercial 4.0</span></li>\r\n<li><span style=\"font-weight: 400;\">deep house drum beat.wav par djfroyd -- <a href=\"https://freesound.org/s/349708/\" target=\"_blank\">https://freesound.org/s/349708/</a> -- Licence : Attribution 3.0</span></li>\r\n<li><span style=\"font-weight: 400;\">15-Second Strum par ViraMiller -- <a href=\"https://freesound.org/s/745885/\" target=\"_blank\">https://freesound.org/s/745885/</a> -- Licence : Attribution 4.0</span></li>\r\n</ul>\r\n</li>\r\n</ul>\r\n</li>\r\n<li><span style=\"font-weight: 400;\">Courte biographie (400 mots maximum, en anglais) et photo haute d&eacute;finition de l'auteur.</span></li>\r\n<li><span>Copyright de l'entra&icirc;nement du mod&egrave;le : une lettre d&rsquo;intention pr&eacute;cisant le respect du droit d&rsquo;auteur conform&eacute;ment &agrave; la licence CC BY-NC (voir ci-dessous) et d&eacute;clarant les sources tierces si utilis&eacute;es.</span></li>\r\n</ul>\r\n<p>Pour soumettre votre mod&egrave;le au challenge sur la plateforme DAFNE+, merci de suivre <a href=\"https://forum.ircam.fr/article/detail/how-to-apply-to-the-rave-model-challenge/\">ce tutoriel</a>.&nbsp;</p>\r\n<p><a href=\"https://dafneplus.eng.it/ipfs/Qmbfqgb5kQ5szF2bXA9hS4gy8z3SMPsUN5aqD4vpX5svn7\">Un template de soumission</a> est disponible dans les contenus associ&eacute;s &agrave; la comp&eacute;tition.</p>\r\n<p><span style=\"font-weight: 
400;\">Seules les propositions compl&egrave;tes seront prises en consid&eacute;ration. </span></p>\r\n<h1><b>&Eacute;VALUATION</b></h1>\r\n<p><span style=\"font-weight: 400;\">Les trois prix seront d&eacute;cern&eacute;s par vote des membres inscrits sur la&nbsp;<span style=\"font-weight: 400;\">plateforme&nbsp;<span><a href=\"https://dafneplus.eng.it/\" target=\"_blank\">DAFNE+</a></span></span></span><span style=\"font-weight: 400;\">&nbsp;(inscription gratuite), r&eacute;compensant les trois mod&egrave;les ayant obtenu le plus grand nombre de votes (par ordre d&eacute;croissant pour les 3 prix). Les mod&egrave;les seront publi&eacute;s sur la&nbsp;<span style=\"font-weight: 400;\">plateforme&nbsp;<span><a href=\"https://dafneplus.eng.it/\" target=\"_blank\">DAFNE+</a></span></span>&nbsp;avec le tag &laquo; RAVE Model Challenge &raquo;. A partir du 1er f&eacute;vrier 2025, les membres pourront t&eacute;l&eacute;charger les mod&egrave;les pour les &eacute;valuer, ainsi qu'&eacute;couter les fichiers audio pour voter pour leur mod&egrave;le pr&eacute;f&eacute;r&eacute;. 
Le lien vers la plateforme de vote sera fourni le 1er f&eacute;vrier 2025 et le vote se cl&ocirc;turera le 28 f&eacute;vrier (midi CET).&nbsp;</span></p>\r\n<h1><b>CONDITIONS DE LICENCE DES MOD&Egrave;LES SOUMIS</b></h1>\r\n<p><span style=\"font-weight: 400;\">Les mod&egrave;les RAVE soumis au concours seront publi&eacute;s en acc&egrave;s libre (sans frais bitcoin) sur la plateforme DAFNE+ sous licence Creative Commons V4 avec option BY-NC.</span></p>\r\n<h2>Links:</h2>\r\n<ul>\r\n<li>RAVE Model Challenge:<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/rave-model-challenge/\">https://forum.ircam.fr/collections/detail/rave-model-challenge/</a><a href=\"https://forum.ircam.fr/collections/detail/rave-model-challenge/\"></a></li>\r\n<li>RAVE collection:<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/rave/\">https://forum.ircam.fr/collections/detail/rave/</a></li>\r\n<li>DAFNE+ Platform:&nbsp;<a href=\"https://dafneplus.eng.it\">https://dafneplus.eng.it</a></li>\r\n<li><span>DAFNE+<span>&nbsp;</span></span>Website:&nbsp;<a href=\"https://dafneplus.eu\">https://dafneplus.eu</a></li>\r\n<li><span>DAFNE+<span>&nbsp;</span></span>Discord:&nbsp;<a href=\"https://discord.gg/aR6VvV9Ttw\">https://discord.gg/aR6VvV9Ttw</a></li>\r\n<li><span>DAFNE+<span>&nbsp;</span></span>Survey:&nbsp;<a href=\"https://forms.gle/czcJyXhmthFkN5V48\">https://forms.gle/czcJyXhmthFkN5V48</a></li>\r\n<li><span>DAFNE+<span>&nbsp;</span></span>YT tutorials playlist:&nbsp;<a href=\"https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ\">https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ</a></li>\r\n<li><span>DAFNE+<span>&nbsp;</span></span>YT intro to Use-Case 2:&nbsp;<a 
href=\"https://dafneplus.eu/2024/02/interview-with-hugues-vinet-ircam-explaining-use-case-2/\">https://dafneplus.eu/2024/02/interview-with-hugues-vinet-ircam-explaining-use-case-2/</a></li>\r\n<li><span>DAFNE+<span>&nbsp;</span></span>Newsletter:&nbsp;<a href=\"https://dafneplus.eu/contact\">https://dafneplus.eu/contact</a></li>\r\n<li><span>DAFNE+<span>&nbsp;</span></span>Contact:&nbsp;<a href=\"mailto:info@dafneplus.eu\">info@dafneplus.eu</a></li>\r\n</ul>\r\n<h1><img src=\"/media/uploads/rave_model_challenge_banniere.png\" alt=\"\" width=\"2778\" height=\"676\" style=\"font-size: 14px;\" /></h1>",
        "topics": [
            {
                "id": 2375,
                "name": "challenge",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1745,
                "name": "nn~",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "rave-model-challenge",
        "pk": 3096,
        "published": true,
        "publish_date": "2024-11-06T20:32:14+01:00"
    },
    {
        "title": "Where There Is Singing, There You Will Settle Down by Caroline Vogel & Juice",
        "description": "“Where There Is Singing, There You Will Settle Down” is a multi-channel sound installation emerging from a participatory workshop on mapping coral reef soundscapes through co-created, speculative instruments. Drawing on the acoustic ecology of living reefs, where the layered polyphony of marine life guides coral larvae to settle down and thrive, the project makes the entanglements of these ecosystems perceptible through sound. As coral reefs grow quieter under the pressure of climate change and biodiversity loss, the installation opens a space to listen, to mourn, and to reflect on reefs as vital, relational communities sustained through sound. \r\nThe project originally collaborated with the Coral Research Lab based in the Horniman Museum and Gardens London and was selected for the London Design Festival 2025.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3a95444cd7af470e35e2f5cd85663616.jpg\" /></p>\r\n<p>Coral reefs are living soundscapes. Like human cities, they hum with constant activity: snapping shrimp crackle like static, fish produce rhythmic pulses, and waves refract through complex reef structures to create a dense acoustic environment. This polyphony is not incidental, as marine biologists and acoustic ecologists have found that reef sound plays a crucial role in guiding coral larvae and juvenile fish toward suitable habitats, acting as a multispecies signal of safety and vitality (Vermeij M. J. A et al., 2010; Radford et al., 2011). In this sense, reefs quite literally sing life into being.<br /><br /><em>Where There Is Singing, There You Will Settle Down</em> takes this ecological phenomenon as both material and metaphor. The multi-channel sound installation concludes a participatory workshop in which participants collectively mapped coral reef soundscapes through listening, speculation, and the co-creation of experimental instruments. The project approaches reefs as relational, acoustic systems defined by ongoing vibrational exchange.<br /><br />Under the conditions of the climate crisis, ocean warming, acidification, and extractive human activity, coral reefs are rapidly losing both their biological diversity and their acoustic complexity. Degraded reefs become quieter, losing the sonic cues that attract new life and reinforcing cycles of ecological collapse (Gordon, 2020). Silence, here, is a symptom of systemic breakdown.<br /><br />The project is informed by posthumanist and more-than-human thought that challenges the separation of humans from ecological systems. 
Drawing on Donna Haraway&rsquo;s call to stay with the trouble (2016), the project resists solutions framed solely through technological optimism or managerial sustainability and cultivates practices of attunement: listening as an ethical act, and sound as a medium through which humans might re-enter damaged ecosystems as responsive, accountable participants rather than distant observers. Similarly, Anna Tsing&rsquo;s notion of arts of noticing (2015) resonates within the work's methodology, which emphasizes slow listening, collective interpretation, and speculative making as ways of engaging with ecological complexity.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/858a0d884854eb8129f9a951a633c28e.png\" /><br /><br />Through performative re-sounding, the installation unsettles binaries such as natural/artificial, human/nonhuman, and knowledge/practice. The co-created sounds and speculative instruments do not attempt to reproduce coral reefs, but rather to enter into a dialogue. These speculative instruments function as mediators and hybrid objects that sit between research tool, musical device, and storytelling apparatus.<br /><br />The sound design of the project is a mixture of real coral reefs from archives and recordings, plus simulated coral reefs recorded using hydrophones through Max/MSP, plugins ASAP and GRM bundles, combined with sea recordings using binaural microphones. Pro Tools was used to construct a quadraphonic spatial audio experience. A custom-designed interface enables touch-sensitive interaction, allowing audiences to modulate and navigate the soundscape bodily rather than cognitively.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>Gordon, T. A. C. (2020). <em>The Changing Song of the Sea: Soundscapes as indicators and drivers of ecosystem transition on tropical coral reefs</em>. Dissertations &amp; Theses, University of Exeter.</p>\r\n<p>Haraway, D. (2016). 
<em>Staying with the Trouble: Making Kin in the Chthulucene</em>. Duke University Press.</p>\r\n<p>Tsing, A. L. (2015). <em>The Mushroom at the End of the World</em>. Princeton University Press.</p>\r\n<p>Radford, C. A., Stanley, J. A., Simpson, S. D., &amp; Jeffs, A. G. (2011). <em>Juvenile coral reef fish use sound to locate habitats</em>. Coral Reefs 30(2):295-305.</p>\r\n<p>Vermeij M. J. A., Marhaver K. L., Huijbers C. M., Nagelkerken I., Simpson S. D. (2010). Coral Larvae Move toward Reef Sounds. PLoS ONE 5(5): e10660. https://doi.org/10.1371/journal.pone.0010660</p>",
        "topics": [
            {
                "id": 4216,
                "name": "#acoustic ecologies",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4217,
                "name": "#coral reefs",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4218,
                "name": "#more-than-human",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4219,
                "name": "#multispecies",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 154032,
            "forum_user": {
                "id": 153808,
                "user": 154032,
                "first_name": "Caroline",
                "last_name": "Vogel",
                "avatar": "https://forum.ircam.fr/media/avatars/a28abc7d0128f35342057f14fb3715d4.jpg",
                "avatar_url": "/media/cache/f1/3e/f13ef682096263da38f613fb41e94d86.jpg",
                "biography": "Caroline Vogel is a practice-based design researcher whose work engages with storytelling, speculative design, and collective worldmaking. Her practice explores how relationships between more-than-human actors, technologies, and humans can be imagined and negotiated, drawing on posthumanist theory and the environmental humanities. Her work takes the form of workshops, interactive installations, and experimental performances that create open-ended spaces for reflection and speculation.",
                "date_modified": "2026-02-13T16:53:49.569996+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "attapulgit",
            "first_name": "Caroline",
            "last_name": "Vogel",
            "bookmarks": []
        },
        "slug": "where-there-is-singing-there-you-will-settle-down-by-caroline-vogel-juice",
        "pk": 4340,
        "published": true,
        "publish_date": "2026-02-10T17:20:53+01:00"
    },
    {
        "title": "Electronic Poem by Kittiphan Janbuala",
        "description": "Electronic Poem is an experimental audiovisual installation. The work is based on generative processing and employs Unicode characters, an international standard for language encoding from the selected digital image files as a primary source content for sound and visualization. A Unicode character has no clear or relevant meaning to human language. However, in the absence of meaning and the speculation of combining various language symbols, there is an embedded form, rhythm, and sound texture of poetry derived from interpretation through the form of electronic poetry",
        "content": "<h2>Electronic Poem by Kittiphan Janbuala</h2>\r\n<p>Electronic Poem, the experimental audiovisual installation explores a non-sense poem to human perception rather than a conventional narrative by open-ended to interact by any level of participants. The process of work creates a generative composition through a degree of freedom determined by the artist to explore instantly sound and visualization moments.&nbsp; Regarding the source of generating materials, based on selected digital image files by the artist, which are converted into Unicode, a computing language, and converted to sound through sonification procedures; using direct translation (audification) of an image&rsquo;s pixel data into sound and also using pre-mapping the Unicode into speech synthesis through computer&rsquo;s voices. The project rejects the conventional notion of creating a rhyme into a glitch domain that results in noisiness and also explores a hidden rhyme or unexpected noise generation as a focusing point of the experimental audiovisual installation. <img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8fa9e2666b5eab552a19fe5af4721bf2.png\" /></p>",
        "topics": [
            {
                "id": 2285,
                "name": "Experimental Audiovisual, Glitch, Randomness, Sonification",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 23342,
            "forum_user": {
                "id": 23324,
                "user": 23342,
                "first_name": "Kittiphan",
                "last_name": "Janbuala",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_3513_2.jpg",
                "avatar_url": "/media/cache/49/1f/491f770e95ee305cda47c11c7077b123.jpg",
                "biography": "An intermedia composer, Kittiphan is currently developing his aesthetic as an interdisciplinary artist specializing in audiovisual performance. In recent years, he has been actively involved in several collaborative projects. His achievements include being a finalist for the Young Thai Artist Award in 2004, 2006, 2008, and 2011, as well as winning first prize in the Asian Composer League Young Composer Competition in 2012. He has participated in numerous events, including the Thailand International Composer Festival, Echo Festival 2011, Sound Bridge 2013, Zoo Electronica 2014, Sonic Moon Festival 2015, SETTS#3, Hearing Visual, Looking Sound, Asia Computer Music Project 2018, Thailand New Music and Arts Symposium, KEAMS 2020, Symposium on Spatial Sound Arts, ICMC 2020/2021, Hypersounds Event VIII, Glitch Art is Dead 2022, FU:BAR Glitch Art Exhibition 2022, Int-Act 2022, Media Wander-Symposium on Media, Arts and Design, and Bangkok Design Week 2024. His approach to live sound and visual processing involves glitch techniques, cut-up methods, and manipulating existing materials through sonification procedures. Kittiphan earned a Doctor of Musical Arts degree from the College of",
                "date_modified": "2026-02-07T05:28:36.582379+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 946,
                        "forum_user": 23324,
                        "date_start": "2024-10-02",
                        "date_end": "2025-10-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 582,
                                "membership": 946
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "icekj",
            "first_name": "Kittiphan",
            "last_name": "Janbuala",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 34,
                    "user": 23342,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 195,
                    "user": 23342,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 126,
                    "user": 23342,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "electronic-poem",
        "pk": 3037,
        "published": true,
        "publish_date": "2024-10-19T19:10:12+02:00"
    },
    {
        "title": "Chroma: A Derek Jarman Project - C-LAB",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>&ldquo;A reconstruction of the Jarman-esque gender identification and colors as a result of the rereading of Derek Jarman&rsquo;s Blue and Chroma. Also a reflection on the dilemma of post-humanity. The feverish love of technology, namely, the &lsquo;Jarman hypothesis,&rsquo;has probably branched out a nihilistic &lsquo;Jarman derivation.&rsquo; This piece takes on the form of the &lsquo;audio-visual theater,&rsquo; serving as a righteous pathway to triggering different sensory experiences. &rdquo; ─ Lin Yu-shi, Taishin Arts Award</p>\r\n<p>&ldquo;A space that liberates the audiences from VR back into reality. Using the VR device to simulate the visual state of Derek Jarman before he went blind. The blurriness and the fogginess resemble the hallucinatory effects of certain psychedelic drugs. The boundaries that divide reality and imagination fall apart. Color temperatures of other audiences&rsquo; body heat seen through the IR sensor--all so clearly. The technology used to inspect and examine the health of the public during the pandemic becomes indescribably sensual, and, sexy.&rdquo; ─ Hsu Yao-wen, Fun Screen</p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/chroma_01_photo_&copy;_taiwan_contemporary_culture_lab.jpg\" alt=\"\" width=\"1336\" height=\"2048\" /></p>\r\n<p></p>\r\n<p><strong>Introduction</strong>&nbsp;</p>\r\n<p>Techno Love: Life of Derek Jarman</p>\r\n<p>Shuttling through Time and Space with Colours</p>\r\n<p>An Alternative Audio Narrative</p>\r\n<p>&nbsp;</p>\r\n<p>We will not be wiped out by this malicious virus.On the contrary, we will become a brave new species,more beautiful, more like the sacrilegious you.</p>\r\n<p>In the 1980&rsquo;s, prior to his death in 1994, British filmmaker Derek Jarman was losing his sight and constantly suffered from AIDS-related complications. 
Haunted by the expected death, Jarman began to envisage an AI metahuman immune to all mortal illnesses, plus a biodiverse utopia, a brave new world for Sons of Jarman.</p>\r\n<p>Following in the spiritual and creative footsteps of Jarman&rsquo;s last movie, Blue, the making of Chroma: A Derek Jarman Project is attempted at re-associating colours with gay identities, personal memories and queer culture. Also, Chroma exploits the convention of the narrative of the audio theatre to re-invent the way of seeing when one gradually loses his eyesight.</p>\r\n<p>Meanwhile, the way future was imagined and looked at back in the 70&rsquo;s and the 80&rsquo;s has now been recoded with new technology. The external conflicts of the gays versus the straights, and of the mainstream versus the marginal, too, have now become a tempest in the teacup: a newly launched war more about labelling, oppression, and contradictions among the gays.</p>\r\n<p>&nbsp;</p>\r\n<p><img src=\"/media/uploads/chroma_03_&copy;_taiwan_contemporary_culture_lab.jpg\" width=\"1552\" height=\"772\" /></p>\r\n<p>&nbsp;</p>\r\n<p><strong>Production</strong></p>\r\n<p>Taiwan Contemporary Culture Lab (C-LAB)</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Creation</strong> <strong>Team</strong></p>\r\n<p>Director: Baboo LIAO</p>\r\n<p>Playwright: U-Lai CHEN</p>\r\n<p>Sound Artist: Ge-Wei LIN</p>\r\n<p>VR Concept and Set Design: Huei-Ming CHANG</p>\r\n<p>VR Video Artist: Yu-Jie HUANG</p>\r\n<p>Vocal: Chao-Yang WANG</p>\r\n<p>Sound Technique and Engineer: C-LAB Taiwan Sound Lab</p>\r\n<p>System Integration: Yu-Ci HUANG</p>\r\n<p>3D modeling: Mark CHANG、Teom CHEN</p>\r\n<p>Graphic design: Aaron NIEH</p>\r\n<p>Translation: Sean YEH</p>\r\n<p>Director Assistant: Chang-En TING</p>\r\n<p>Technical Consultant: Aluan WANG</p>\r\n<p>Sound Consultant: Chia-Hui CHEN</p>\r\n<p></p>\r\n<p></p>\r\n<p></p>\r\n<p></p>\r\n<p></p>",
        "topics": [],
        "user": {
            "pk": 31229,
            "forum_user": {
                "id": 31182,
                "user": 31229,
                "first_name": "Tom",
                "last_name": "Debrito",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d239346e0c19ec2b960555378b5fe912?s=120&d=retro",
                "biography": "Tom Debrito was the Events Coordination Manager of the IRCAM Forum for the year 2022-2023, as part of a work-study contract.\n\nHe was in charge of the coordination of the Forum Workshops 2022 with New York University, the Forum Workshops 2023 in Paris, and the Forum Workshops 2023 in Taipei in collaboration with the C-LAB. In addition, he handled communication and marketing-related tasks to support the development of the IRCAM Forum.",
                "date_modified": "2023-10-30T12:25:43.859854+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 389,
                        "forum_user": 31182,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "debrito",
            "first_name": "Tom",
            "last_name": "Debrito",
            "bookmarks": []
        },
        "slug": "chroma-a-derek-jarman-project",
        "pk": 2065,
        "published": true,
        "publish_date": "2023-02-15T16:46:12+01:00"
    },
    {
        "title": "RAVE Model Challenge - Winners",
        "description": "Winners of the RAVE Model Challenge 2025",
        "content": "<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/images/rave_model_challenge_v5.jpeg\" /></p>\r\n<p>🎉 <strong>RAVE Model Challenge Winners Announced!</strong> 🎉</p>\r\n<p>We are thrilled to reveal the winners of the <strong>RAVE Model Challenge</strong>, a competition celebrating innovation in neural audio modeling. A huge congratulations to all participants for their impressive and creative contributions!</p>\r\n<p>🏆 <strong>1st Prize</strong> &ndash; <strong>&euro;2000 + one year IRCAM Forum Premium Membership</strong><br /><a href=\"https://forum.ircam.fr/projects/detail/black-latents/\">🎖️ <strong>Martin Heinze</strong></a></p>\r\n<p>🥈 <strong>2nd Prize</strong> &ndash; <strong>&euro;1000 + one year IRCAM Forum Premium Membership</strong><br /><a href=\"https://forum.ircam.fr/projects/detail/instant-albania/\">🎖️ <strong>Dylan Burchett &amp; Christopher Trapani</strong></a></p>\r\n<p>🥉 <strong>3rd Prize (tie)</strong> &ndash; <strong>&euro;250 + one year IRCAM Forum Premium Membership</strong><br /><a href=\"https://forum.ircam.fr/projects/detail/bootymachinenet_z_vinston_200701/\">🎖️ <strong>Tristan Zand</strong></a><br /><a href=\"https://forum.ircam.fr/projects/detail/random_v2/\">🎖️ <strong>Julien Bloit &amp; BeatSurfing</strong></a></p>\r\n<p>Congratulations to the winners, and thank you to everyone who participated in this edition! 🚀</p>\r\n<p><a href=\"https://forum.ircam.fr/article/detail/rave-model-challenge-proposals/\">Listen to </a>and <a href=\"https://forum.ircam.fr/collections/detail/rave-model-challenge-models/\">download all the models submitted to the Challenge</a>.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/thumbs/rave_model_challenge_banniere.png/rave_model_challenge_banniere-2778x676.png\" /></p>",
        "topics": [
            {
                "id": 2376,
                "name": "model",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1745,
                "name": "nn~",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2786,
                "name": "RAVE Model Challenge 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "rave-model-challenge-winners",
        "pk": 3369,
        "published": true,
        "publish_date": "2025-03-22T23:46:50+01:00"
    },
    {
        "title": "Z.A.R by Ling-Hsuan Huang",
        "description": "Z.A.R - An Exploration of the Expansion of Live Instrumental Performing Forms through Visual Media and Spatial Sound",
        "content": "<p><span>&nbsp;</span><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>This presentation will explore how \"spatial sound\" and \"visual design\" can expand the expressive forms of live instrumental performances. The presentation and demonstration will introduce how composers use spatial sound field technologies, particularly Spat5 from IRCAM, in combination with Wave Field Synthesis, to extend the presence of instrumental sound in space. The sound is visualized in real space and integrated with visual elements such as video projections and lighting design, creating a multisensory, immersive performance experience. The talk will also include a sharing of the composer&rsquo;s past works, explaining how instrumental performance is transformed, expanded, and reinterpreted in space. Special focus will be placed on the new work Z.A.R, which was premiered at 2025 C-LAB Sound Festival: DIVERSONICS. This piece features the guzheng as a solo instrument, combined with spatial electronic sound through Wave Field Synthesis and enhanced by visual projections, exploring the augmented presentation of sound and body movements in physical space. For example, laser lights trace the reverberations of the strings, or real-time video magnifies the details of the string vibrations, allowing the sound process to be \"visualized,\" thereby expanding the audience's perception of sound. 
From the perspectives of composition, sound design, and media arts, this lecture will present new possibilities for contemporary instrumental performance and reconsider the roles of virtual and physical spaces, perception, and performativity in sound art.</p>\r\n<p><img src=\"/media/uploads/huang1.png\" alt=\"\" width=\"1430\" height=\"873\" /></p>\r\n<p><img src=\"/media/uploads/huang2.png\" alt=\"\" width=\"2902\" height=\"1530\" /></p>\r\n<p><img src=\"/media/uploads/huang3.png\" alt=\"\" width=\"3848\" height=\"2168\" /></p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 308,
                "name": "Live electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 45,
                "name": "Spat5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3138,
                "name": "spatial sound",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 11339,
            "forum_user": {
                "id": 11336,
                "user": 11339,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee80dbe3679018f687104a05dd7c998f?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-03-04T17:03:55.205379+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 507,
                        "forum_user": 11336,
                        "date_start": "2023-10-13",
                        "date_end": "2024-10-13",
                        "type": 0,
                        "keys": [
                            {
                                "id": 61,
                                "membership": 507
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "lhuang",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "zar-an-exploration-of-the-expansion-of-live-instrumental-performing-forms-through-visual-media-and-spatial-sound",
        "pk": 4515,
        "published": true,
        "publish_date": "2026-03-15T22:58:11+01:00"
    },
    {
        "title": "Making Electronic Music Accessible to All: A Virtual Studio for Visually Impaired Composers - Butch Rovan",
        "description": "Current tools for composing with interactive technologies are not accessible to visually impaired composers - graphical patching paradigms have made them user-friendly but ultimately hostile to use by the blind. This presentation addresses that challenge, discussing a new tangible interface and Max programming environment that I designed to be fully accessible to blind composers. The system offers new possibilities not only to the visually impaired but also to sighted composers.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Presented by: Butch Rovan&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/butchrovan/\">Biography</a></p>\r\n<p>Over the past few decades, electronic tools have considerably expanded the possibilities for creative musical expression. Modern composers have benefited from this evolution through access to increasingly powerful and user-friendly technologies. Yet these technologies are not equally accessible to everyone. The ubiquity of low-contrast visual interfaces, cascading menus, and graphical patching paradigms has made most electronic music composition tools not user-friendly but ultimately hostile to the visually impaired. This article addresses that challenge by presenting a new interface and Max programming environment that I designed to be fully accessible to blind composers.</p>\r\n<p>Today, audio software designers tend to make concessions to visually impaired users, expecting them to navigate complex program menus and interfaces with clumsy, inefficient screen readers. I saw the futility of this approach firsthand while working with a highly talented composer who also happens to be visually impaired. I began to ask myself: what would it look like to approach solutions differently, not by making concessions but by making real progress? Designing new software and hardware solutions based on strengths rather than deficits? Redesigning rather than retrofitting? These questions led me to consider all the other sensory modalities - such as tactile and auditory feedback coupled with the interface - that could make a software environment fully usable by everyone.</p>\r\n<p>This exploration resulted in an original software/hardware system with a custom user interface that gives visually impaired composers full access to the creative capabilities of the Max music programming language. This enabling technology reverses the inequity of currently available tools by implementing the principles of universal design: making tools accessible, understandable, and usable by all people, regardless of their age, size, ability, or disability. My presentation will focus on this new musical interface and the particular ways in which it:</p>\r\n<ul>\r\n<li>Gives visually impaired composers access to tools that their sighted peers use daily</li>\r\n<li>Explores HCI accessibility</li>\r\n<li>Opens new possibilities not only for visually impaired composers but also for sighted ones</li>\r\n</ul>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a></strong></p>",
        "topics": [
            {
                "id": 293,
                "name": "Accessible music technology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1798,
                "name": "HCI",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1797,
                "name": "interface",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 157,
                "name": "Real-time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1799,
                "name": "Universal Design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4057,
            "forum_user": {
                "id": 4055,
                "user": 4057,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/BR_promo_-_Little_Sister_-_IMG_2068.jpg",
                "avatar_url": "/media/cache/34/d4/34d4ce70aa0095c0e36223b84f7c0aa1.jpg",
                "biography": "Butch Rovan is a composer, media artist, and performer on the faculty of the Music and Multimedia Composition (MMC) program at Brown University. From 2013-16 he was chair of Music and from 2016-19 he was the inaugural faculty director of the Brown Arts Initiative. \n\nPrior to Brown, Rovan was a compositeur en recherche with the Real-Time Systems Team at IRCAM in Paris, and a faculty member at Florida State University and the University of North Texas, where he directed the Center for Experimental Music and Intermedia. Rovan worked at Opcode Systems before leaving for Paris, serving as Product Manager for Max, OMS and MIDI hardware.\n\nRovan has received prizes from the Bourges International Electroacoustic Music Competition, first prize in the Berlin Transmediale International Media Arts Festival, and has contributed writing to numerous books and journals. His music appears on Wergo, EMF, Circumvention, and SEAMUS labels. Rovan's research includes sensor hardware design and wireless microcontroller systems. In 2019 he received a patent with Peter Bussigel for a new electronic musical instrument design.",
                "date_modified": "2024-03-27T21:43:18.420132+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 615,
                        "forum_user": 4055,
                        "date_start": "2014-08-19",
                        "date_end": "2024-11-06",
                        "type": 0,
                        "keys": [
                            {
                                "id": 114,
                                "membership": 615
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "butchrovan",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2742,
                    "user": 4057,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "making-electronic-music-inclusive-a-virtual-studio-for-visually-impaired-composers",
        "pk": 2742,
        "published": true,
        "publish_date": "2024-02-15T23:17:26+01:00"
    },
    {
        "title": "DAFNE+ workshop: Minting and Versioning Content on the Platform with Hugues Vinet, Greg Beller and Guillaume Piccarreta",
        "description": "DAFNE+ offers digital content creators new ways to create, distribute, and monetize their works of art through blockchain technology. This workshop, given as part of the Ateliers du Forum IRCAM @Paris 2025, walks you through the platform's main operations, including uploading, versioning, and maintaining your content, as well as taking part in its governance through the DAO's features.",
        "content": "<h1>DAFNE+:&nbsp;Minting and Versioning Content</h1>\r\n<p><span style=\"font-size: 30px;\">THURSDAY 27 MARCH - IRCAM, Shannon room, 2:00&ndash;3:30pm.<br />The DAFNE+ platform is designed to meet the evolving needs of digital content creators, providing them with innovative tools for creating, distributing, and monetizing their artistic works through blockchain technology. &ldquo;One of the main goals of the project is to make content distribution fair.&rdquo;<br />In a simple, intuitive way, with no technical knowledge of blockchain/NFTs required, creative communities are invited to join the decentralized autonomous organization (DAO), which offers new services and tools enabling the creation and co-creation of content on a blockchain. DAFNE+ research also focuses on defining new business models for content distribution, allowing creators and users to monetize multimedia creations.<br />IRCAM's role in DAFNE+ is in particular to organize a community of artists and technology providers around electronic music and sound. Halfway between the IRCAM Forum and Sidney, the archive of the interactive music repertoire, and based on an autonomous organization and a distributed infrastructure, the platform will let artists, researchers, and engineers share and monetize elements of technology for the production of music and performance works: libraries, patches, documentation...</span></p>\r\n<h2>Workshop preparation</h2>\r\n<p>Please bring any content (image, sound bank, AI model, production or performance patch) that you would like to share under the CC-BY-NC license, possibly including different versions of realizations (e.g. a patch for a work).</p>\r\n<h2 id=\"workshop-agenda\">Agenda</h2>\r\n<ul>\r\n<li><span>Introduction to the DAFNE+ project</span></li>\r\n<li><span>Practical workshop - mint and version a content item</span></li>\r\n<li><span>Feedback round and discussions</span></li>\r\n<li><span>Wrap-up and what&rsquo;s next&hellip;</span></li>\r\n</ul>\r\n<h2 id=\"links\">Links</h2>\r\n<ul>\r\n<li><span>Website:<span>&nbsp;</span></span><a href=\"https://dafneplus.eu/\"><span>https://dafneplus.eu</span></a></li>\r\n<li><span>Platform:<span>&nbsp;</span></span><a href=\"https://dafneplus.eng.it/\"><span>https://dafneplus.eng.it</span></a></li>\r\n<li><span>Discord:<span>&nbsp;</span></span><a href=\"https://discord.gg/aR6VvV9Ttw\"><span>https://discord.gg/aR6VvV9Ttw</span></a></li>\r\n<li><span>Survey:<span> <a href=\"https://forms.gle/2LcB5owCHJteZFub6\">https://forms.gle/2LcB5owCHJteZFub6</a></span></span></li>\r\n<li><span>YT tutorials playlist:&nbsp;<a href=\"https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ\">https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ</a></span></li>\r\n<li><span>Newsletter:<span>&nbsp;</span></span><a href=\"https://dafneplus.eu/contact\"><span>https://dafneplus.eu/contact</span></a></li>\r\n<li>Contact:<span>&nbsp;</span><a href=\"mailto:info@dafneplus.eu\">info@dafneplus.eu</a></li>\r\n<li>Presentation:<span>&nbsp;</span><a href=\"https://forum.ircam.fr/article/detail/dafne-launch-of-the-platform-for-the-preservation-and-promotion-of-experimental-music-and-sound-production\">https://forum.ircam.fr/article/detail/dafne-launch-of-the-platform-for-the-preservation-and-promotion-of-experimental-music-and-sound-production</a></li>\r\n</ul>",
        "topics": [
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2551,
                "name": "deleuze",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1856,
                "name": "platform",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "dafne-workshop-minting-and-versioning-content-on-the-platform-with-hugues-vinet-greg-beller-and-guillaume-piccarreta",
        "pk": 3319,
        "published": true,
        "publish_date": "2025-03-05T12:42:28+01:00"
    },
    {
        "title": "Embeddings II by Roberto Becerra",
        "description": "A sound art installation by Roberto Becerra, 26 Sept. 2025, Liepaja (Latvia)",
        "content": "<p>Embeddings II&nbsp; continues the exploration begun in Embeddings I, a work that examines sound through the lens of information theory and its capacity to construct perceived realities&mdash;and perhaps even material ones.</p>\r\n<p>As vector embeddings encode contextual nuances in textual meaning, Embeddings II investigates how perceptual meaning can be modulated through sound and other induced informational elements.</p>\r\n<p>The installation features intentionally ambiguous speech, whose interpretation shifts depending on accompanying &ldquo;extrinsic&rdquo; sonic cues. Through this dynamic interplay, Embeddings II draws parallels between machine learning representations, neural correlates, and the intrinsic informational content of sound&mdash;framed within a materialist perspective.</p>\r\n<p style=\"text-align: center;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e6b92ef471efb336e4f0562018666eab.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: center;\">image generated by the artist</p>\r\n<p style=\"text-align: left;\"><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 3353,
                "name": "becerra",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2736,
                "name": "Forum 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1707,
                "name": "installation sonore",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3350,
                "name": "latvia",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3351,
                "name": "liepaja",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3352,
                "name": "roberto",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 25233,
            "forum_user": {
                "id": 25206,
                "user": 25233,
                "first_name": "Roberto",
                "last_name": "Becerra",
                "avatar": "https://forum.ircam.fr/media/avatars/Screenshot_2025-08-21_at_00.28.22.png",
                "avatar_url": "/media/cache/9a/64/9a64747c999aa0939c7b9accb0855d2f.jpg",
                "biography": "Roberto Becerra (México) is a sound artist, acoustician, engineer, and multidisciplinary artist. He holds a BSc in Mechatronics Engineering (ITESM, Mexico, 2009) and an MSc in Acoustics and Music Technology (University of Edinburgh, 2011), with further training in Digital Signal Processing (University of California San Diego, 2014) and Machine Learning (Stanford Online, 2024). \n\nHe is an Assistant Lecturer at the Department of Composition of the Lithuanian Academy of Music and Theatre, as part of the Music Studies Innovation Centre. He specializes in spatial audio, computer and electronic music, and sound art. Roberto also manages a section of the FilmEU project at the LMTA, where he coordinates the joint MA Resound. \n\nRoberto is also a co-founder of Idėjų blokas LT, VšĮ, which created Ideas Block - creative space and cafe (2018), the cultural map app Arttice (2020), and Ideas Block - Kompresorinė, a cultural space (2022).\n\nIn his artistic practice, Roberto focuses on the materiality of sound and music, language, space, communication, politics, and perception. He uses sound art installation, experimental music, and multidisciplinary expression as his artistic language.",
                "date_modified": "2025-10-08T14:13:35.100185+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "iorobertob-gmail-com",
            "first_name": "Roberto",
            "last_name": "Becerra",
            "bookmarks": []
        },
        "slug": "embeddings-ii",
        "pk": 3662,
        "published": true,
        "publish_date": "2025-09-03T09:54:09+02:00"
    },
    {
        "title": "Gears, a web app to create music differently - Clément Bossut, Pierre Lambla",
        "description": "What is Gears, where does it come from, and what could it become?",
        "content": "<p><span><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Presented by: Cl&eacute;ment Bossut,&nbsp;Pierre Lambla<br /><a href=\"https://forum.ircam.fr/profile/cbossut/\">Biography</a></span></p>\r\n<p><strong>Gears is an intuitive interface dedicated to musical composition, developed by Pierre Lambla and Cl&eacute;ment Bossut.</strong></p>\r\n<p>It is built on the harmonic synchronization of sound samples. Reversing the path of Pythagoras, who derived arithmetic from observing harmonic resonances, Gears proposes to establish rational relationships between sound samples in order to harmonize them.</p>\r\n<p>It thus becomes possible to reach the natural harmonic resonances of any sound, both melodically and rhythmically, and to synchronize sounds very easily, which makes Gears an original tool for exploring the field of composition, for beginners and experienced sound creators alike.</p>\r\n<p>The interface consists of a sound library, a blank page where the composition unfolds, and a minimal set of tools.</p>\r\n<p>Once dragged onto the workspace, sound samples become sound wheels whose size is proportional to their duration.</p>\r\n<p><span>You can then, essentially:</span></p>\r\n<p>- connect wheels with belts to make them play simultaneously;</p>\r\n<p>- change their size, modifying their duration and pitch, or timestretch their content;</p>\r\n<p>- harmonize them by establishing rational ratios between them;</p>\r\n<p>- freely sequence wheels together, like strings of beads, generating larger wheels that contain the sequence.</p>\r\n<p>These characteristics make Gears a highly original musical tool that leads the user directly to the natural principles of harmony, while also inviting (and allowing) them to compose music free of any metronomic constraint or square grid, making authentic sounds and musical phrases the substrate of all synchronization.</p>\r\n<p>Gears is currently a web application accessible from any browser.</p>\r\n<p>It was developed by L'Upito, with the support of the DRAC and the R&eacute;gion Centre Val de Loire.</p>\r\n<p>&nbsp;</p>\r\n<p>The main goal of the upcoming talk will be to introduce Gears and the IRCAM Forum community to each other, to see how they might get along. The question session will then be the best way to discover this tool from everyone's point of view.</p>\r\n<p>First, however, the basics and fundamentals need to be explained and demonstrated.</p>\r\n<p>&nbsp;</p>\r\n<p>The talk will therefore begin with a quick presentation of the two artistic-scientific researchers behind Gears, briefly addressing the deep principles underlying this simple tool.</p>\r\n<p>I will demonstrate a standard use of the software for discovery purposes, while discussing the broad ideas we want to convey through our design.</p>\r\n<p>Then, various use cases will be shown and heard, opening the way to the diversity of creation made accessible.</p>\r\n<p>After a mention of features in progress, we will end with a short piece by Pierre Lambla, keeping, I hope, enough time to hear as many thoughts as the audience has to express.</p>\r\n<p><span><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a></strong></span></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 66764,
            "forum_user": {
                "id": 66694,
                "user": 66764,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ab86a1b0e35fef7c6ca8f043e5b6892c?s=120&d=retro",
                "biography": "Feel free to reach me!\nMy email is of the form lastname.firstname@gmail.com\n\nClément Bossut is a French performing artist and computer scientist.\nHe studied musicology and computer science at Sorbonne University.\nAfter working as an engineer on the OSSIA Project for SCRiME/LaBRI at Bordeaux University, he met Nicolas Villenave, creator of the spectacular light installation Le Chant du Filament, and worked on this project to include algorithmic procedures in the control tool for the show. He was then entrusted with the creation of Kara da Kara, a duet between a Japanese contemporary dancer and choreographer and the light installation, a show orchestrated by OSSIA.\nHe also worked with Cie Léna d’Azy to create the autonomous writing and playing tool for the piece Columbia Circus, and with Cie Lubat on the show Robot pour être vrai, which put robotic arms and Metabots at play on stage with improviser Bernard Lubat.\nHe has also been working with Parti Collectif since its creation in 2014, a transdisciplinary artistic collective known in Bordeaux for improvised music and music theatre.\n\nGears Lead Developer",
                "date_modified": "2024-03-26T15:58:38.163205+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "cbossut",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2833,
                    "user": 66764,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "gears-a-web-app-to-create-music-differently",
        "pk": 2833,
        "published": true,
        "publish_date": "2024-03-15T02:01:25+01:00"
    },
    {
        "title": "\"Musikmaschine\" by Astrid Drechsler",
        "description": "This student project of FH JOANNEUM combines traditional instruments such as the zither and dulcimer with contemporary electronic music and an easy-to-understand public interface. It was developed for the Cultural Capital of Heritage 2024 in Austria.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p></p>\r\n<p>Presented by Astrid Drechsler</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/astdrechsler/\" target=\"_blank\">Biography</a></p>\r\n<p><img alt=\"Musikmaschine Bad Ischl\" src=\"https://forum.ircam.fr/media/uploads/user/bcc8eb185833530cdc4b2d4ddaf16e71.jpg\" width=\"1467\" height=\"1956\" /></p>\r\n<p>The sound characteristics of instruments from the Salzkammergut region are paired with new and unusual styles of playing. First of all, the installation &ldquo;Musikmaschine&rdquo; deals with the materials of traditional instruments such as wood, metal, and stone. Using coding, electromechanical, and digital elements, these materials become sound generators. Thus, new materialities and synergies between the characteristics of regional instruments and the style of electronic music evolve. Visitors can operate the &ldquo;Musikmaschine&rdquo; via an interface. They can create short audio compositions and as a result build a bridge between tradition and innovation. For this purpose, seven different sound generators were developed and built for the \"Musikmaschine\" using old, used, and partly broken instruments. Controlled by microcontrollers, solenoids and exciters stimulate the material and produce sounds.&nbsp;</p>\r\n<p>During the IRCAM Forum Workshops, there will also be a performance with the machine by students of FH JOANNEUM. They work with the materials of the traditional instruments, which are arranged into beats and combined with atmospheric soundscapes and multi-layered synthesizer sounds.<br />Project supervision and implementation: Astrid Drechsler, Daniel Fabry<br />Concept and prototype by students of the Master's degree programs in Sound Design and Interaction Design: Nagyija Bog&aacute;s, Carolin da Silva Nasseri, Theresa Dietinger, Florian Drechsel-Burkhard, Valerie Feichtmair, Johanna Gumpelmeyer, Anine Hall&eacute;n, Madeleine Hurst, Andrea Hurtado Torres, Daniil Ivanov, Georg Kiraly, Edwin Lang, Nadine Lowden, Helena Opower, Andrea Ortner, Nadja Pirchheim, Oliver Posmayer, Korin Rizzo, Yao Sossou, Alexander Wildinger, Weronika Wrzosek, Lea Zucker<br />Implementation: Astrid Drechsler, Daniel Fabry, Andreas Heller, Edwin Lang, Jakob Pock, Korin Rizzo</p>\r\n<p>Performance: Mahtab Miandehi, Hannah Albrecht, Anna Semmelrath, Francisco Sylla</p>",
        "topics": [
            {
                "id": 2682,
                "name": "Bela board",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2683,
                "name": "Controllino",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2685,
                "name": "cultural capital of heritage 2024",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2681,
                "name": "FH JOANNEUM",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2684,
                "name": "max msp",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 886,
                "name": "pure data",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 19646,
            "forum_user": {
                "id": 19639,
                "user": 19646,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f2401be53f99843660ba64b76a812704?s=120&d=retro",
                "biography": "Sound Design Master study program of FH JOANNEUM, \nUniversity of Applied Sciences Graz\nDesigning of and with sound forms the core of the interuniversity study track Sound Design, part of the study programme Communication, Media, Sound and Interaction Design. Detailed knowledge of the artistic design, media-enabled preparation, and technical processing of sound, as well as semantic and psychoacoustic perception, is developed. Sound designers create and edit sounds for movies, computer games, audio logos, and brand jingles. They make data audible and optimize the sound of products. In cooperation with the Graz University of Music and Performing Arts, the program also addresses the acoustic environment, soundscapes, and sound ecology. The study program is taught in English.",
                "date_modified": "2025-03-22T08:35:44.947717+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 269,
                        "forum_user": 19639,
                        "date_start": "2026-03-13",
                        "date_end": "2027-03-13",
                        "type": 0,
                        "keys": [
                            {
                                "id": 771,
                                "membership": 269
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "astdrechsler",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "musikmaschine",
        "pk": 3308,
        "published": true,
        "publish_date": "2025-02-25T22:46:03+01:00"
    },
    {
        "title": "Her Writing - Yuhan Chen, Yaohan Chen, Yiqing Tian, Yang Chen",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>Her Writing is an immersive VR digital animation. The work uses the method of feminine writing, combining VR, digital sculpture and poetry. Using the female body as a natural scene, the work presents the common status and situation of women and nature created by the frame of consumption alienation and patriarchal society - prosperity from afar, decay up close. We try to write in a chaotic, non-linear language of jumping thoughts and feelings.</p>\r\n<p><br /><a href=\"https://www.behance.net/gallery/157881609/Her-Writing \">I have put the sound clip in the attachment, please check them out.</a></p>",
        "topics": [],
        "user": {
            "pk": 32954,
            "forum_user": {
                "id": 32906,
                "user": 32954,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/328d8b6402857624d70d2fed8d6168db?s=120&d=retro",
                "biography": "",
                "date_modified": "2023-01-31T05:06:49+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "yuhanchen",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "her-writing",
        "pk": 2091,
        "published": true,
        "publish_date": "2023-02-28T15:54:18+01:00"
    },
    {
        "title": "Immersive Telematic Performance by Randall Packer",
        "description": "Telematic Theater is a re-imagined online performance platform merging techniques of live theater, gaming, and cinema into a seamless telematic experience. A project of the Third Space Network, Telematic Theater was conceived by Randall Packer with software design by Théophile Clet, Federico Foderaro, and Matthew Ostrowski. Telematic Theater realizes the virtual stage as an immersive, networked environment where remote performers and audiences converge in a dynamic 3D audio-visual world. Through real-time compositing, green-screened performers inhabit 3D virtual scenic spaces, interacting with layered digital environments of lighting, sound, and cinematic camera movement.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/92eeb260773f6d4afad1d1b4b9d236da.png\" width=\"937\" height=\"527\" /></p>\r\n<p>Presented by: Randall Packer, Th&eacute;ophile Clet and Federico Foderaro</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/rpacker/\">Profile for Randall Packer</a><br /><a href=\"https://vimeo.com/1062441804\">Telematic Theater Trailer</a></p>\r\n<p>At its core, Telematic Theater features the Audio-Visual Panner, a spatialization tool that synchronizes 3D performer positioning in spherical environments with 3D ambisonic and binaural sound. This correlation between sound, image, movement and space heightens the impact of telepresence, providing distributed audiences a powerful immersive experience as they view the performance within a digital volumetric space.</p>\r\n<p>Telematic Theater operates as a virtual staging and rehearsal environment for the composer/scenographer to coordinate hundreds of stage parameters through precision presets and animated interpolation. The space is shaped by the compositional control of performer positioning, virtual camera choreography, along with background/foreground scenic layering, spotlight control, and extensive modulation of visual effects.</p>\r\n<p>Telematic Theater ultimately blurs the boundaries between live performance, game interactivity, and cinematic storytelling, offering a hybrid tool that is an entirely unique form of performance stagecraft. 
Our approach is designed to redefine the collaborative relationship between director, artist, engineer, performer, and audience through the re-invention of the online performance space with each production. It also extends the reach of performance beyond geographic constraints, fostering new avenues for audience engagement. We believe the Telematic Theater represents the theater of the future, ushering in a new era of global, networked performance art.</p>\r\n<p>For more information visit the<span>&nbsp;</span><a href=\"https://thirdspacenetwork.com/telematic-theater/\">Third Space Network Website</a>.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/805826dfb54395d3de5897df3f531e6b.png\" width=\"1122\" height=\"631\" /></p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 2151,
            "forum_user": {
                "id": 2149,
                "user": 2151,
                "first_name": "RANDALL",
                "last_name": "Packer",
                "avatar": "https://forum.ircam.fr/media/avatars/Randall_Packer-Headshot.jpeg",
                "avatar_url": "/media/cache/31/42/3142121f9bc166a4c1ffc9111730ff69.jpg",
                "biography": "Randall Packer is a media artist, composer, writer, and educator working at the intersection of networked performance and immersive sound. He is Artistic Director of Zakros InterArts, a fully online alternative arts organization based in Washington DC, and has overseen the creation of the Telematic Theater, a networked platform for the creation of online performance and experimental broadcast forms that connect studios, performers, and audiences across distance in real time. \n\nPacker’s work bridges music composition, technology, media theory, and dramaturgy. He holds a Ph.D. in Music Composition from the University of California, Berkeley, an M.F.A. in Music Composition from the California Institute of the Arts, and a Certificate in Computer Music from IRCAM/Centre Pompidou. \n\nPacker’s practice, for more than 30 years, has advanced a coherent throughline: to couple aesthetic inquiry with technical rigor in order to deliver immersive, participatory performance beyond the confines of the traditional theater venue. He brings a collaborative practice aimed at networking, leading to a shared, open toolkit for collaborating artists and engineers working in immersive multimedia practices.",
                "date_modified": "2026-03-14T17:24:31.358159+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 488,
                        "forum_user": 2149,
                        "date_start": "2023-10-09",
                        "date_end": "2026-10-26",
                        "type": 0,
                        "keys": [
                            {
                                "id": 54,
                                "membership": 488
                            },
                            {
                                "id": 233,
                                "membership": 488
                            },
                            {
                                "id": 816,
                                "membership": 488
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "rpacker",
            "first_name": "RANDALL",
            "last_name": "Packer",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 38,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 387,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 599,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 394,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 98,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 38,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 645,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 492,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 487,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 613,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 111,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 21,
                    "emitter_object_id": 277,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2516,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 117,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 2151,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3218,
                    "user": 2151,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "immersive-telematic-performance",
        "pk": 3334,
        "published": true,
        "publish_date": "2025-03-06T17:56:51+01:00"
    },
    {
        "title": "bytebeat custom chip by proppy (Japan)",
        "description": "This workshop introduce a custom (and open source) chip capable of reproducing the sound from the original bytebeat video (and more!)",
        "content": "<p></p>\r\n<p>This chip implements a digital sound processor that produce a 8 bit waveform using the formula from the original bytebeat <a href=\"https://www.youtube.com/watch?v=tCRPUv8V22o\">video</a>.</p>\r\n<p>It features 4 x 4 bit parameters that alter the constants of the original formula, creating a suprising wide audio space for the operator to explore.</p>\r\n<p>It was designed using the <a href=\"https://github.com/google/xls\">XLS: Accelerated Hardware Synthesis</a> toolkit and fabricated w/ the Skywater 130nm process using the <a href=\"https://tinytapeout.com/\">TinyTapeout</a> service.</p>\r\n<p>The source code and fabrication files for the projects are available at <a href=\"https://github.com/proppy/tt05-bytebeat\">https://github.com/proppy/tt05-bytebeat</a>.</p>\r\n<p><img alt=\"picture of bytebeat chip with its development board\" src=\"https://forum.ircam.fr/media/uploads/user/d539476f5a2c8afd035fa79b2148991a.jpg\" /><img alt=\"3d view of the bytebeat chip layers\" src=\"https://forum.ircam.fr/media/uploads/user/5ae062d7b9b6e458924e265d82922858.jpg\" /></p>",
        "topics": [
            {
                "id": 3442,
                "name": "bytebeat",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3444,
                "name": "opensource",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3443,
                "name": "silicon",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 133731,
            "forum_user": {
                "id": 133556,
                "user": 133731,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/15f28d4f784bfde6231baada6f4432a9?s=120&d=retro",
                "biography": "Based in Tokyo, playing with Open Source Silicon and modular synthesis.",
                "date_modified": "2025-09-16T02:17:21.174365+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "proppy",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "bytebeat-custom-chip-by-proppy-japan",
        "pk": 3738,
        "published": true,
        "publish_date": "2025-10-03T10:33:54+02:00"
    },
    {
        "title": "Tisser la matière de la mémoire : Pilotage de modèles audio latents par l'apprentissage automatique interactif - Gabriel Vigliensoni",
        "description": "\"Weaving memory matter\" est une démonstration et une performance où je montrerai la dirigeabilité et le contrôle des modèles de synthèse audio neuronale par le biais de l'apprentissage automatique interactif. Le contrôle en temps réel des systèmes de synthèse audio neuronale est important car il permet aux interprètes d'introduire la cohérence temporelle à long terme qui fait souvent défaut dans ces systèmes.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par:&nbsp;Gabriel&nbsp;Vigliensoni<br /><a href=\"https://forum.ircam.fr/profile/vigliensoni/\">Biographie</a><br /><br /></p>\r\n<p>\"Weaving memory matter\" est une d&eacute;monstration et une performance o&ugrave; je montrerai la dirigeabilit&eacute; et le contr&ocirc;le des mod&egrave;les de synth&egrave;se audio neuronale par le biais de l'apprentissage automatique interactif. Le contr&ocirc;le en temps r&eacute;el des syst&egrave;mes de synth&egrave;se audio neuronale est important car il permet aux interpr&egrave;tes d'introduire la coh&eacute;rence temporelle &agrave; long terme qui fait souvent d&eacute;faut dans ces syst&egrave;mes.</p>\r\n<p>Les progr&egrave;s r&eacute;cents dans le domaine de la synth&egrave;se audio neuronale, tels que l'architecture RAVE (Caillon et Esling 2021), ont permis d'am&eacute;liorer la g&eacute;n&eacute;ration de signaux audio en temps r&eacute;el. RAVE s'attaque aux probl&egrave;mes des syst&egrave;mes pr&eacute;c&eacute;dents, notamment la grande complexit&eacute; de calcul, la mauvaise qualit&eacute; du signal et le manque de coh&eacute;rence temporelle lors de la mod&eacute;lisation de signaux audio polyphoniques complexes. Il rem&eacute;die &eacute;galement au manque de moyens d'interaction. Ces progr&egrave;s ont facilit&eacute; l'utilisation de ces mod&egrave;les en temps r&eacute;el. 
Toutefois, compte tenu de la grande dimensionnalit&eacute; potentielle de l'int&eacute;gration apprise et de l'absence d'&eacute;tiquettes pour les axes de l'espace latent, il est crucial de trouver une meilleure m&eacute;thode pour l'interaction et la performance en temps r&eacute;el.</p>\r\n<p>Dans cette d&eacute;monstration, je pr&eacute;senterai une m&eacute;thode utile pour diriger des mod&egrave;les audio neuronaux &agrave; l'aide de l'apprentissage automatique interactif. Cette approche permet &agrave; l'interpr&egrave;te de mettre en correspondance l'espace de performance humaine bien connu et de faible dimension avec l'espace latent de haute dimension d'un mod&egrave;le audio g&eacute;n&eacute;ratif. Cette correspondance est apprise gr&acirc;ce &agrave; un ensemble d'entra&icirc;nement contenant des emplacements appari&eacute;s des deux espaces.</p>\r\n<p>Au cours de la d&eacute;monstration, mon processus comprendra : (i) l'exploration de l'espace latent d'un mod&egrave;le RAVE pr&eacute;-entra&icirc;n&eacute; pour identifier les points de potentiel cr&eacute;atif ; (ii) la s&eacute;lection de points sources dans l'espace de performance qui correspondent &agrave; des points cibles dans l'espace latent ; (iii) la r&eacute;p&eacute;tition de ces &eacute;tapes en fonction des qualit&eacute;s sonores d&eacute;couvertes ; et (iv) l'utilisation d'un algorithme de r&eacute;gression pour apprendre une correspondance entre les points dans les deux espaces. Ce processus peut &ecirc;tre r&eacute;p&eacute;t&eacute; si n&eacute;cessaire pour ajuster la cartographie.</p>\r\n<p>Dans la performance de d&eacute;monstration \"Weaving Memory Matter\", mon objectif est de d&eacute;montrer comment nous pouvons r&eacute;cup&eacute;rer le contr&ocirc;le artistique sur les syst&egrave;mes g&eacute;n&eacute;ratifs d'IA en entra&icirc;nant des mod&egrave;les personnalis&eacute;s sur des donn&eacute;es conserv&eacute;es et en dirigeant le processus g&eacute;n&eacute;ratif. 
La performance utilise un mod&egrave;le RAVE entra&icirc;n&eacute; sur une partie des archives sonores du Museo de la Memoria y los Derechos Humanos de Santiago du Chili. Les technologies utilis&eacute;es sont RAVE, nn~, Facemesh, FluCoMa et Max.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 2681,
            "forum_user": {
                "id": 2679,
                "user": 2681,
                "first_name": "Gabriel",
                "last_name": "Vigliensoni",
                "avatar": "https://forum.ircam.fr/media/avatars/76200002.jpg",
                "avatar_url": "/media/cache/91/7b/917b71fb3fb129fd7f608faf3feb5df6.jpg",
                "biography": "Gabriel Vigliensoni is an electronic music artist, performer, and researcher whose work currently explores the creative affordances of the machine learning paradigm in the context of sound- and music-making. His practice merges formal musical training with extensive studies and experience in sound recording, music production, music information retrieval, human-computer interaction, and machine learning to explore and develop novel approaches to music composition and performance. Vigliensoni's creative work and research have been showcased internationally at venues and conferences such as CCA (QC), CMMAS (MX), MUTEK (CL, QC), ICCC (CA, PT), IKLECTIK (GB), ISEA (CA), NMF (GB), NIME (US), ICMC (US), and ISMIR (BR, CN, FR, US, NL). He earned a PhD in Music Technology from McGill University and currently serves as an Assistant Professor in Creative Artificial Intelligence in the Department of Design and Computation Arts at Concordia University.",
                "date_modified": "2025-09-26T18:09:57.326652+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "vigliensoni",
            "first_name": "Gabriel",
            "last_name": "Vigliensoni",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2783,
                    "user": 2681,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 2681,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "weaving-memory-matter-steering-latent-audio-models-through-interactive-machine-learning",
        "pk": 2783,
        "published": true,
        "publish_date": "2024-03-02T03:13:38+01:00"
    },
    {
        "title": "Liminal: A Human–AI Interaction Space Between Control and Autonomy by Zhitao Lin",
        "description": "Liminal investigates new modes of human–AI co-creation through embodied gesture and generative audiovisual systems. Drawing on the concept of liminality and traditional Chinese aesthetics, the installation reframes interaction as a negotiated temporal process rather than explicit gesture-to-output mapping. Real-time computer vision and AI-driven synthesis enable an evolving environment where human intention and machine agency coexist and transform one another.",
        "content": "<h5 id=\"➡️-this-presentation-is-part-of-ircam-forum-workshops-paris-engh\"><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></h5>\r\n<p>&nbsp;</p>\r\n<p><em>Liminal</em> is an interactive audiovisual installation that explores a specific interaction space between human participants and an AI-driven system. Rather than positioning the human as a controller or the AI as an autonomous generator, the work investigates what happens in the intermediate state where agency is shared, negotiated, and continuously reconfigured over time.</p>\r\n<p>Many contemporary interactive music and media systems rely on explicit gesture-to-parameter mappings, producing immediate and predictable responses. In such models, interaction is framed as control, and the system functions as an instrument. Conversely, fully autonomous AI systems minimize human influence, framing the machine as an independent creator. <em>Liminal</em> is situated deliberately between these two paradigms, proposing interaction as an evolving process rather than a sequence of commands or automated outputs.</p>\r\n<h3><strong><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/49b1a3685db6a7cb12021ffd1cca78dd.jpg\" /></strong></h3>\r\n<h3><strong>Defining the Liminal Interaction Space</strong></h3>\r\n<p>&nbsp;</p>\r\n<p>In this project, <em>liminal</em> refers neither to a metaphor nor to a poetic abstraction. It designates a <span><strong>concrete interaction state and operational space between human input and AI behavior</strong></span>. Within this space, neither the human nor the machine fully determines the outcome of the interaction. Instead, audiovisual results emerge through continuous negotiation across time.</p>\r\n<p>Human gestures are not treated as instructions. 
Likewise, the AI system does not act independently of human presence. Gestural input functions as contextual information that influences the system&rsquo;s internal decision-making processes without dictating them. The system, in turn, maintains its own temporal coherence and behavioral continuity, responding to human presence while preserving structural autonomy.</p>\r\n<p>This liminal state exists precisely because the system resists collapsing into either direct control or full automation. Interaction unfolds through gradual modulation, accumulation, and transformation, rather than instantaneous cause-and-effect relationships.</p>\r\n<hr />\r\n<h3><strong>Liminality as a Temporal Condition</strong></h3>\r\n<p>&nbsp;</p>\r\n<p>A defining characteristic of the liminal interaction space in <em>Liminal</em> is its dependence on time. Gestures do not produce immediate outcomes; instead, they influence probabilities, tendencies, and trajectories that unfold across extended durations. The system incorporates temporal memory and decay mechanisms, allowing past interactions to shape future behavior without fixing results.</p>\r\n<p>This temporal structure shifts participation from momentary intervention to sustained engagement. Participants learn that influence is cumulative and indirect, encouraging attentive listening and adaptive movement rather than exploratory triggering. Interaction becomes a process of shaping conditions rather than producing events.</p>\r\n<hr />\r\n<h3><strong>System Architecture and Interaction Design</strong></h3>\r\n<p>&nbsp;</p>\r\n<p>The system is implemented as a real-time interactive engine connecting gesture perception, decision-making processes, and audiovisual synthesis within a continuous feedback loop.</p>\r\n<p>Gesture data is captured through computer vision&ndash;based analysis and processed into higher-level descriptors such as spatial distribution, motion continuity, velocity, and pause duration. 
These descriptors are interpreted as behavioral tendencies rather than discrete control values.</p>\r\n<p>A decision layer implemented in Python mediates between gestural tendencies and generative processes. This layer applies probabilistic weighting, temporal smoothing, and internal state tracking to ensure that interaction remains gradual and coherent over time. The system retains memory of prior states while allowing influence to decay, preventing abrupt shifts and preserving continuity.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8b9f2a20379cec0e955aef4c22b1e573.png\" /></p>\r\n<p>On the audio side, these evolving interaction states modulate generative music processes within a modular Max/MSP environment. Musical density, timbral emphasis, and spatial behavior are reshaped continuously, while the system maintains its own internal musical logic. Sound diffusion emphasizes spatial perception, reinforcing the embodied nature of interaction.</p>\r\n<p>Visual generation operates in parallel through a real-time generative pipeline. Interaction states influence transformation processes rather than triggering discrete images, resulting in continuously evolving ink-wash&ndash;inspired visuals combined with iridescent textural elements. Audio and visual modalities are synchronized at the level of interaction state rather than event-based triggering, allowing them to co-evolve as parts of a unified system.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f72005e0bc05b1417f3fa3caf1ec8b5f.jpg\" /></p>\r\n<hr />\r\n<h3><strong>Distributed Agency Between Human and AI</strong></h3>\r\n<p>&nbsp;</p>\r\n<p>Within the liminal interaction space, agency is distributed rather than centralized. Decision-making emerges from the interaction of multiple layers: human movement, gesture interpretation, probabilistic modeling, and audiovisual synthesis. 
No single layer fully governs the outcome.</p>\r\n<p>This distribution prevents both human dominance and machine autonomy. The human influences the system without being able to predict or control it fully. The AI system maintains behavioral coherence without asserting independence. Authorship is therefore continuously deferred, existing neither in the participant nor in the machine, but in the evolving interaction between them.</p>\r\n<hr />\r\n<h3><strong>Presentation Context and Observations</strong></h3>\r\n<p>&nbsp;</p>\r\n<p><em>Liminal</em> is presented as an open installation in which participants may enter and leave freely. Individual and collective interactions produce distinct audiovisual dynamics, revealing how the system responds differently to varied patterns of presence and movement.</p>\r\n<p>Across presentations, participants often transition from exploratory gestures toward more restrained and attentive actions. This behavioral shift reflects an understanding that influence operates over time rather than through immediate feedback. 
Each realization of the work develops differently, shaped by accumulated interaction histories rather than predefined scenarios.</p>\r\n<hr />\r\n<h3><strong>Questions Raised by Liminal Interaction</strong></h3>\r\n<p>&nbsp;</p>\r\n<p>The project raises broader questions relevant to interactive and AI-driven art practices:</p>\r\n<ul>\r\n<li>\r\n<p>How can interaction be designed without defaulting to control-based paradigms?</p>\r\n</li>\r\n<li>\r\n<p>What forms of authorship emerge when outcomes are temporally negotiated rather than directly triggered?</p>\r\n</li>\r\n<li>\r\n<p>How can AI systems participate in creative processes without imitating or replacing human expression?</p>\r\n</li>\r\n</ul>\r\n<p>&nbsp;</p>\r\n<p>By framing interaction as a liminal space between human intention and machine process, <em>Liminal</em> offers a practical model for addressing these questions through system design rather than conceptual abstraction.</p>\r\n<hr />\r\n<h3><strong>Conclusion</strong></h3>\r\n<p><em>Liminal</em> proposes a model of human&ndash;AI co-creation grounded in a clearly defined intermediate interaction space. By resisting both direct control and full automation, the work establishes interaction as a shared temporal process shaped by continuous negotiation. This approach offers an alternative framework for designing interactive systems in which humans and machines coexist as co-actors within an evolving creative environment.</p>",
        "topics": [
            {
                "id": 3462,
                "name": "AI & Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4088,
                "name": "Audiovisual Art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4089,
                "name": "Gesture Interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4090,
                "name": "Human–AI Co-Creation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4087,
                "name": "Interactive Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 153323,
            "forum_user": {
                "id": 153099,
                "user": 153323,
                "first_name": "Zhitao",
                "last_name": "Lin",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f259fa5fe4f4b1dc753d03e2b280d696?s=120&d=retro",
                "biography": "Zhitao Lin is a composer and new media artist working at the intersection of sound, computation, and cultural memory. His practice blends algorithmic composition, real-time interaction, and Chinese aesthetic philosophy-crafting immersive environments where sound and gesture co-evolve through intelligent systems.\nRooted in spectral techniques and Zen-inspired poetics, Lin's works reimagine music as an adaptive organism: a medium that listens, learns, and responds. From Al-driven audiovisual installations to gestural soundscapes, his projects explore the boundary between intuition and computation, tradition and abstraction.\nHis work has been featured internationally, including at ICMC 2025, SIGGRAPH Asia 2025 Real-Time Live, and other major festivals and media art venues.\nHe is currently a DMA candidate at the Peabody Institute of Johns Hopkins University, and holds a B.A. from UC Berkeley.",
                "date_modified": "2026-02-04T02:07:30.507554+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maxlzt",
            "first_name": "Zhitao",
            "last_name": "Lin",
            "bookmarks": []
        },
        "slug": "liminal-a-humanai-interaction-space-between-control-and-autonomy",
        "pk": 4269,
        "published": true,
        "publish_date": "2026-01-27T17:25:11+01:00"
    },
    {
        "title": "Somax2 version 2.6 is out!",
        "description": "This latest version includes brand new features to enhance dynamic control of your environment and to reach new forms of interaction.",
        "content": "<p>For the first time, you will be able to record audio corpora in real time into a somax.player, handle multi-region control over different areas of the corpus, and synchronize events to beat phase alignment.</p>\r\n<p>The documentation, tutorials, and help files have been updated for the entire package to guide you through these new functionalities.</p>\r\n<p>The detailed list of additions includes:</p>\r\n<ul>\r\n<li>\r\n<p><strong>Real-time Corpus Recording:</strong> A new object <code>somax.audiorecord</code> (and corresponding GUI object <code>somax.audiorecord.ui</code>) has been added, which allows recording new audio corpora directly (and in real time) into a somax.player. This module also allows extending existing audio corpora by recording new material. The module is integrated into the somax.player.app interface by default. In addition, there's also a <a href=\"https://vimeo.com/showcase/10297257/video/923777971\">new Max tutorial and video tutorial on real-time corpus recording</a>. See the <code>somax.audiorecord</code> maxhelp for more details.</p>\r\n</li>\r\n</ul>\r\n<p><img style=\"height: 450px;\" alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1b827d3b50fdc9e9721fb7c98b1cbdae.png\" /> &nbsp; &nbsp; &nbsp;<img style=\"height: 450px;\" alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/680db6816cc3d2632062f5b9ac79052a.png\" />&nbsp; &nbsp; &nbsp;&nbsp;&nbsp; &nbsp; &nbsp;&nbsp;<br /><br /></p>\r\n<ul>\r\n<li><strong>Corpus Region Filter:</strong> You can now edit and select any of 6 regions of the corpus in a player&rsquo;s interface. A new object <code>somax.regions</code> has been added, which gives more detailed control over the different regions of the corpus. It's now possible to individually control up to 6 regions of the corpus, and the regions can be set exactly by time, in addition to being set as a relative proportion of the corpus. 
See the <code>somax.regions</code> maxhelp for more details.<br /><br /><img style=\"height: 550px;\" alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0610e54e33beca0c17eee6ce520900d3.png\" /> &nbsp; &nbsp; &nbsp;<img style=\"height: 550px;\" alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b11656c851029385d873ec17afeb8138.png\" />&nbsp; &nbsp; &nbsp;&nbsp;<br /><br /></li>\r\n</ul>\r\n<ul>\r\n<li>\r\n<p><strong>Beat Alignment:</strong> The <code>somax.player</code>'s algorithm for beat (phase) alignment has been rewritten so that it can adapt more strictly to an external or internal beat, and a number of new parameters for more precise control of the beat alignment have been added.</p>\r\n</li>\r\n<li>\r\n<p><strong>Windows Compatibility:</strong> Somax is now available on Windows! Download the <a href=\"https://github.com/DYCI2/Somax2/releases/download/v2.6.0/Somax-2.6.0-win64.zip\">Somax-2.6.0-win64.zip</a> file and follow the installation procedure in the readme / user guide.</p>\r\n</li>\r\n<li>\r\n<p><strong>Apple Silicon Compatibility:</strong> For Mac users, all externals have been updated to universal binaries, running on both Intel and ARM-based machines. It is therefore no longer necessary to run Somax under Rosetta on ARM-based machines.</p>\r\n</li>\r\n<li>\r\n<p><strong>Various Bug Fixes:</strong> A number of bug fixes and clarifications have been made, as well as documentation updates.</p>\r\n</li>\r\n</ul>\r\n<p>Go to the <a href=\"https://forum.ircam.fr/projects/detail/somax-2/\">Somax2 Forum page</a> for installation.</p>\r\n<p>See more at the <a href=\"http://repmus.ircam.fr/somax2\">Somax2 Project Page</a>.</p>\r\n<p>Somax2 is an application for musical improvisation and composition using AI, featuring machine listening, a cognitive memory-activation model, a multi-agent architecture, a full application interface for agent patching and control, and a full Max library API. 
Somax2 is implemented in <a href=\"https://cycling74.com/products/max/\">Max</a> and Python and is based on a generative AI model that provides real-time machine improvisations coherent both with the internally selected corpus styles and with the unfolding external musical context. Somax2 handles both MIDI and audio input, corpus memory, and output. The model can be used with little configuration to let its agents autonomously interact with musicians (and with one another), but it also allows a variety of manual controls over its generative process and interaction strategies, effectively letting one use it as a fully flexible smart instrument.</p>",
        "topics": [
            {
                "id": 753,
                "name": "Artificial intelligence,",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1200,
                "name": "cocreativity",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 545,
                "name": "Repmus team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 747,
                "name": "somax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17748,
            "forum_user": {
                "id": 17743,
                "user": 17748,
                "first_name": "Gerard",
                "last_name": "Assayag",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e7f22ca09fef8b854d33ed5de26b107e?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-03T15:40:57.523680+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1236,
                        "forum_user": 17743,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "assayag",
            "first_name": "Gerard",
            "last_name": "Assayag",
            "bookmarks": []
        },
        "slug": "somax2-version-26-is-out",
        "pk": 2838,
        "published": true,
        "publish_date": "2024-03-18T17:02:48+01:00"
    },
    {
        "title": "Tutorial: Neural Synthesis in a DAW with RAVE",
        "description": "Learn to perform neural audio synthesis inside your favorite digital audio workstation with RAVE.",
        "content": "<p>Have you always wanted to try out neural synthesis without writing a single line of code, or doing any patching? In this article, let's see how to play with RAVE inside digital audio workstations (Live, Logic, FL Studio, Cubase...) with the RAVE VST.</p>\r\n<h1>Video Tutorial</h1>\r\n<p><iframe width=\"425\" height=\"350\" src=\"//www.youtube.com/embed/HC0L5ZH21kw\"></iframe></p>\r\n<h1>Installation</h1>\r\n<p>All you need to bring neural synthesis into your favorite DAW is the RAVE audio plug-in and a model that has been previously trained on a given dataset. Don&rsquo;t worry, you can download the IRCAM models directly inside the RAVE plug-in. To obtain the audio plug-in, please go to the <a href=\"https://forum.ircam.fr/projects/detail/rave-vst/\">RAVE VST Forum webpage</a> and download the installer that corresponds to your platform.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a1eb17cc1a5d2a3a446fd8cf71158c78.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h1>How does RAVE work?</h1>\r\n<p>RAVE is an auto-encoder, meaning that it takes sound as an input, generates sound as an output, and is trained to reconstruct the incoming sounds of the dataset. This processing is based on two separate stages:</p>\r\n<ul>\r\n<li>an <strong>encoding</strong> stage, where a given window of incoming audio (let's say 2048 samples) is transformed into a set of <em>latent</em> variables (128 parameters in general)</li>\r\n<li>and a <strong>decoding</strong> stage, which inverts these 128 latent variables back into sound.</li>\r\n</ul>\r\n<p>We can then describe the RAVE transformation process like this: RAVE translates incoming audio into a set of synthesis parameters, which are then used to regenerate the sound. 
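The window-in, latents-out shape of this pipeline can be sketched in plain Python. This is only a shape-level illustration: the real RAVE encoder and decoder are trained neural networks, while these stubs just average and repeat samples.

```python
WINDOW = 2048      # samples of incoming audio per analysis window
LATENT_DIM = 128   # latent variables produced by the encoder

def encode(window):
    """Stub encoder: turn 2048 audio samples into 128 latent parameters."""
    assert len(window) == WINDOW
    chunk = WINDOW // LATENT_DIM  # 16 samples per latent dimension
    # A real encoder is a neural net; we just average sample chunks here.
    return [sum(window[i * chunk:(i + 1) * chunk]) / chunk
            for i in range(LATENT_DIM)]

def decode(latents):
    """Stub decoder: invert 128 latent parameters back into 2048 samples."""
    assert len(latents) == LATENT_DIM
    chunk = WINDOW // LATENT_DIM
    return [z for z in latents for _ in range(chunk)]

audio_in = [0.5] * WINDOW
latents = encode(audio_in)       # analysis: audio -> synthesis parameters
audio_out = decode(latents)      # synthesis: parameters -> audio
print(len(latents), len(audio_out))  # 128 2048
```

Whatever happens between `encode` and `decode` (bias, scale, noise) is exactly what the plug-in's latent controls expose.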
As each model is trained on a limited set of data (orchestral sounds, NASA sounds, ...), it will try to extract these parameters even if the input sound does not match the original database; this is why RAVE can perform <em>timbre transfer</em>. For example: if RAVE has been trained on piano sounds and is given a violin sound, it will try to extract synthesis parameters from it and regenerate it as a piano sound.</p>\r\n<p>This is also why you can use RAVE as an audio effect, by transforming incoming audio, but also as a synthesizer, by directly controlling these latent synthesis parameters. As 128 dimensions are far too many to control manually, they are usually reduced to eight dimensions, which you can manipulate inside the VST.</p>\r\n<h1>Playing with RAVE inside the DAW</h1>\r\n<h2>Using RAVE as an effect</h2>\r\n<p>RAVE VST is an audio effect, as it transforms an audio input with a selected neural network. However, you can still use it as a synthesizer; we will see that later. 
If you open the plug-in editor, you will see this interface:</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/831199ffccb996566da1282c21237034.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<ol>\r\n<li><strong>Model Selection Menu</strong>: selects the active RAVE model.</li>\r\n<li><strong>Model Explorer</strong>: opens the interface to download models from the Forum website.</li>\r\n<li><strong>Information</strong>: shows information about RAVE VST.</li>\r\n<li><strong>Latent Noise</strong>: injects some noise into the latent variables of incoming audio.</li>\r\n<li><strong>Stereo Width</strong>: recreates stereo from mono models by randomizing some latents before decoding.</li>\r\n<li><strong>Use Prior</strong>: if available, uses the prior to generate latents.</li>\r\n<li><strong>Latent Bias</strong>: biases latents with a static value.</li>\r\n<li><strong>Latent Scale</strong>: scales the incoming trajectory by a static factor.</li>\r\n<li><strong>Mute with Playback</strong>: cuts the plug-in output when the DAW is paused.</li>\r\n<li><strong>Gain</strong>: input gain of incoming audio before model transformation.</li>\r\n<li><strong>Channel mode</strong>: if the model is mono (all of them are so far), which channel to select for transformation: L, R, or (L+R).</li>\r\n<li><strong>Threshold</strong>: compression threshold of audio before transformation.</li>\r\n<li><strong>Ratio</strong>: compression ratio of audio before transformation.</li>\r\n<li><strong>Dry/Wet</strong>: mixes the model's output with the dry signal.</li>\r\n<li><strong>Gain</strong>: overall output gain.</li>\r\n<li><strong>Latency</strong>: which buffer size to use for model transformation. A small buffer size means low latency, but a higher CPU load.</li>\r\n<li><strong>Adaptive Latency</strong>: when enabled, measures the processing time of the model and adds it to the overall latency. 
Toggling it refreshes the latency computation.</li>\r\n</ol>\r\n<p>This may look a little complicated at first glance, so let's make it work step by step. To make RAVE VST transform the sound, you first need to select a model in the <em>Model Selection</em> menu <strong>(1)</strong>. If this is the first time you have installed RAVE VST, you should have no models available; if so, you will first have to click on the <em>Model Explorer</em> <strong>(2)</strong> to access the Model Explorer panel (screenshot below), select a model in the list on the left <strong>(18)</strong>, and then click the Download <strong>(20)</strong> button. You can also import a custom model using the <em>Import your custom model</em> button <strong>(19)</strong>. Then, go back to the main interface by clicking <em>Play</em> <strong>(21)</strong>.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/6a6a1611ee89906e637b05e511695bd3.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Well, that's it! Depending on the model you chose, the plug-in may generate sound even if the track is empty; this is because some models have not been trained on silence, and so do not know how to reproduce it.</p>\r\n<h3>Adjusting input parameters</h3>\r\n<p><strong>Input dynamics.</strong> The panel with buttons <strong>(10) - (17)</strong>, which you can unfold by clicking the arrow on the very left of the plug-in window, is very important for calibrating how the effect will react to your sound, especially with dynamics. Indeed, by definition, RAVE is highly (<sub>very highly</sub>) non-linear, and will consequently behave differently depending on input loudness. For this reason we added basic gain and compression controls to let you deal with that directly in the plug-in interface. 
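Why do a threshold and a ratio help here? A static compression curve evens out the input loudness before it reaches the non-linear model. The sketch below shows the standard static-compressor gain formula, not the plug-in's actual DSP:

```python
def compressed_level_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compression curve: output level in dB for a given input level.
    threshold_db and ratio are example values, not the plug-in defaults."""
    if level_db <= threshold_db:
        return level_db  # below threshold: level is untouched
    # Above threshold, the excess is divided by the ratio,
    # flattening loudness differences fed into the model.
    return threshold_db + (level_db - threshold_db) / ratio

# A 12 dB spread above threshold collapses to 3 dB with ratio 4:
print(compressed_level_db(-8.0))   # -17.0
print(compressed_level_db(-20.0))  # -20.0
```

With the dynamic range squeezed this way, the model sees a more consistent input level, so its behavior varies less from note to note.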
You can also select which channel to listen to, as RAVE VST models process monophonic signals.</p>\r\n<p><strong>Buffer size.</strong> A very important parameter is the latency controls <strong>(16) &amp; (17)</strong>. RAVE models have a significant latency, which cannot be reduced, as the models need a certain number of samples for transformation. You can adapt the buffer size with the <em>Latency Mode</em> menu <strong>(16)</strong>. Buffer sizes have a direct impact on CPU consumption: a small buffer size offers reduced latency at the cost of a higher CPU load, while a big buffer size increases latency but lowers the CPU load.</p>\r\n<p><strong>Adaptive latency.</strong> This latency is declared by the plug-in to offer latency compensation within the DAW, but it is difficult to evaluate exactly. The <em>Adaptive</em> toggle <strong>(17)</strong> also accounts for the processing time of the model, by timing the delay between the input and the output of the model. However, this timing may be strongly biased depending on your CPU load; so do not hesitate to deactivate it, or to trigger the timer again by deactivating and reactivating it.</p>\r\n<h2>Using RAVE as a synthesizer</h2>\r\n<p>Even though RAVE is an audio effect plug-in, it may still be used as a synthesizer, even if it does not take MIDI notes as input. Instead, you can modulate the inner latent parameters of the RAVE model, to make them totally insensitive, or hyper-sensitive, to the input.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/81f8523ce1b9f13905fc4a4cf8bbe590.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p><strong>Playing with latent parameters. </strong>The star-like shape with rapidly moving points depicts the positions of the first 8 latent synthesis parameters that the model infers from the incoming audio (or the prior, see below). 
Here, the 1st dimension is selected: you can see this from the <em>Latent #1</em> labels under the two latent knobs at the bottom, and from the highlighted area on the circle. If you want to control another latent variable, click on the corresponding zone in the circle. The <em>Latent #N bias</em> <strong>(7)</strong> will add a constant value to the incoming latent value, while <em>Latent #N scale</em> <strong>(8)</strong> will multiply the incoming latent value by the given amount. <strong>Hence, by setting the scale to 0, you will be able to directly control this latent synthesis parameter</strong>. By doing this for every dimension, you can use RAVE as a synthesizer. The good thing is that RAVE offers hybridization between a full-generator mode and a full audio-effect mode, so do not hesitate to explore all of these possibilities!</p>\r\n<p><strong>Latent Noise &amp; Stereo Width.</strong> The two knobs <strong>(4) &amp; (5)</strong> can be used for grouped latent-parameter operations: <em>Latent Noise</em> will add noise to all the RAVE latent variables, performing a kind of latent glitch that will introduce more chaos into your model. The <em>Stereo Width</em> knob simulates a stereo output by randomizing some latent parameters (the ones you do not access) between the L &amp; R outputs (remember that RAVE VST models process mono signals!).</p>\r\n<p><strong>Prior mode.</strong>&nbsp; Some models embed a trained <em>prior module</em>, which can be summarized as a latent parameter generator. If such a prior is provided, you can enable the prior mode by clicking <strong>(6)</strong>: if you do so, the decoder will not use the latent variables of the incoming sound, but the ones generated by the prior module. Latent knobs <strong>(7) &amp; (8)</strong> are still effective, so do not hesitate to play with them!</p>\r\n<p>Well, that's it! 
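The bias/scale remapping described above boils down to one line per dimension: z' = z × scale + bias. A minimal sketch (illustrative values, not the plug-in's code):

```python
def remap_latents(latents, scales, biases):
    """Apply per-dimension scale then bias to incoming latent values,
    mirroring the Latent #N scale and Latent #N bias knobs."""
    return [z * s + b for z, s, b in zip(latents, scales, biases)]

incoming = [0.3, -1.2, 0.7]   # latents inferred from the input audio
scales   = [1.0, 0.0, 1.0]    # scale 0 ignores the input entirely...
biases   = [0.0, 0.5, 0.0]    # ...so dimension 2 is set directly by its bias

print(remap_latents(incoming, scales, biases))  # [0.3, 0.5, 0.7]
```

Setting every scale to 0 gives the full-synthesizer mode (output depends only on the biases); all scales at 1 with zero biases gives the pure audio-effect mode, and everything in between is the hybrid.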
Do not hesitate to ask your questions in the <a href=\"https://discussion.forum.ircam.fr/c/rave-vst/651\">RAVE VST Forum</a>.</p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 726,
                "name": "DAW",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 674,
                "name": "neural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20182,
            "forum_user": {
                "id": 20174,
                "user": 20182,
                "first_name": "Axel",
                "last_name": "Chemla-Romeu-Santos",
                "avatar": "https://forum.ircam.fr/media/avatars/Photo.jpg",
                "avatar_url": "/media/cache/f7/78/f778be374ea22ae4fcea1834f753924b.jpg",
                "biography": "Based in Paris, France, Axel Chemla—Romeu-Santos works a researcher, composer, and performer in various fields such as music, theater, and artificial intelligence. After a double undergraduate degree in Engineering Sciences & Music Theory, he specialized in acoustics and computer music at IRCAM. Always looking for creativity through technology, he initiated a PhD between IRCAM (Paris) and LIM (Milano) on the creative uses of generative artificial intelligence for sound synthesis. After graduation, he continued a research & creation approach to artificial intelligence, working both on scientific papers on AI creativity, and experimental musical pieces exploring diverse aspects of these technologies (such as network bending, real-time improvisation, and composition). \nBesides institutional works, he also work as musician and composer for the company Théâtre de la Suspension, is co-founder of the w.lfg.ng collective, member of the maximalist electronic music band Daim™, and has his personal project Kenoma.",
                "date_modified": "2025-10-21T19:56:31.408648+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 626,
                        "forum_user": 20174,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-18",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "chemla",
            "first_name": "Axel",
            "last_name": "Chemla-Romeu-Santos",
            "bookmarks": []
        },
        "slug": "tutorial-neural-synthesis-in-a-daw-with-rave",
        "pk": 2845,
        "published": true,
        "publish_date": "2024-03-20T12:36:56+01:00"
    },
    {
        "title": "Using MaxISiS to Compose a Virtual Choir",
        "description": "This article describes some of the operations used to generate voices with ISiS from Max, which allowed me to build a virtual choir. Handy for producing a simulation before a rehearsal. The quality also makes the result enjoyable as it is!",
        "content": "<p>Hello everyone,</p>\r\n<p>In this article, I describe a workflow that allowed me to compose for a choir and to produce a pre-render that proved very useful for correcting voicings, and whose sound let the singers hear what I had in mind. To be honest, the result was so far beyond my expectations that I never tire of listening to it.</p>\r\n<p>For this, I used <a href=\"https://musescore.com/\">MuseScore</a> for free score editing and XML export, then <a href=\"/projects/detail/max-isis/\">MaxISiS</a>, which itself relies on the <a href=\"/projects/detail/isis/\">ISiS</a> singing-voice synthesis engine.</p>\r\n<p>First of all, I started from Scriabin's Prelude Op. 11 No. 9, which I paired with a poem, and arranged it for 3 male voices (bass, baritone, tenor), with the bass voice in the lead.</p>\r\n<p><a href=\"https://www.youtube.com/watch?v=KSBfjfknqnc\">https://www.youtube.com/watch?v=KSBfjfknqnc</a></p>\r\n<p>I made the arrangement in MuseScore 3.0, but I could have used any editor that exports MusicXML.</p>\r\n<p><img src=\"/media/uploads/user/64be59c0fd03c571ea8444f85b131e5e.png\" alt=\"\" width=\"1922\" height=\"2718\" /></p>\r\n<p>Then comes the hardest part: I transcribed the French text into <a href=\"https://fr.wikipedia.org/wiki/X-SAMPA\">X-SAMPA</a>. 
In fact, ISiS only uses a small subset of it, enough to transcribe French (the three available ISiS voices are French, for the moment):</p>\r\n<ul>\r\n<li>vowels: a, e, E, 2, 9, @, i, o, O, u, y, o~, a~, e~, 9~</li>\r\n<li>semi vowels: w, j, H</li>\r\n<li>voiced fricatives: v, z, Z</li>\r\n<li>unvoiced fricatives: f, s, S</li>\r\n<li>voiced plosives: b, d, g</li>\r\n<li>unvoiced plosives: p, t, k</li>\r\n<li>nasals: m, n, N</li>\r\n<li>liquids: R, l</li>\r\n</ul>\r\n<p>ISiS needs exactly one vowel per note. So if a vowel is held across a tie, it has to be duplicated.</p>\r\n<pre>si je coupe la lune en deux si je coupe la lune en deux je t'off ri rai le croissant le plus lu mi neux si je coupe la lune en deux il ne me res te que le ver sant plus sombre le des tin d'une ombre l'a mer tume d'a voir tout off ert un soir un mar tyre un sans rire homme a mer un mys t&egrave;re de la nuit si je coupe la lune en deux j'a tten drai qu'elle soit plei ne</pre>\r\n<p>The transcription gives:</p>\r\n<pre>si Z9 kup la ly na~ d2 si Z9 kup la ly na~ d2 Z9 tO fRi RE l@ kRwa sa~ l@ ply ly mi n2 2 si Z9 kup la ly na~ d2 il n2 m2 REs t2 k2 l@ vER sa~ ply so~bR l@ dEs t9~ dy no~bR la mER tym da vwaR tu to fER 9~ swaR 9~ maR ti iR 9~ sa~ Ri iR Om a mE E Er 9~ mis tE ER d2 la ny i i i i si Z9 kup la ly na~ d2 2 Za ta~ dRE kE l9 swa plE n2 </pre>\r\n<p>Which looks like this on the score:</p>\r\n<p><img src=\"/media/uploads/user/63597db96610b1bd9a799e5032ae17a2.png\" alt=\"\" width=\"1922\" height=\"2718\" /></p>\r\n<p>Once the score is ready, I export the separate parts as MusicXML. Beware: by default, MuseScore uses the .musicxml extension, whereas MaxISiS only accepts .xml files. 
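Besides renaming by hand or with AppleScript, a few lines of Python can shorten the extensions in bulk. This is a sketch assuming the exported parts sit together in a hypothetical folder named "parts":

```python
from pathlib import Path

def shorten_extensions(folder):
    """Rename every MuseScore-exported .musicxml file to the .xml
    extension that MaxISiS expects."""
    folder = Path(folder)
    if not folder.is_dir():
        return  # nothing to do if the folder does not exist
    for part in folder.glob("*.musicxml"):
        part.rename(part.with_suffix(".xml"))

# "parts" is a hypothetical folder holding the exported parts.
shorten_extensions("parts")
```

After running it, Basse.musicxml, Baryton.musicxml, and Tenor.musicxml become Basse.xml, Baryton.xml, and Tenor.xml, ready to be imported.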
So I have to shorten the file extensions by hand or with a small AppleScript.</p>\r\n<p>In <a href=\"/projects/detail/max-isis/\">MaxISiS</a> or in <a href=\"/projects/releases/max-isis/\">ISiS4Live</a>, I import the parts one after another and synthesize them with different voices:</p>\r\n<ul>\r\n<li>Bass: RT: male tenor pop singer. (mean MIDI pitch: 50 - D3)</li>\r\n<li>Baritone: MS: female mezzo-soprano pop singer. (mean MIDI pitch: 65 - F4)</li>\r\n<li>Tenor: EL: female soprano lyrical singer. (mean MIDI pitch: 69 - A4)</li>\r\n</ul>\r\n<p>For each voice, I synthesize with the five different styles. This way, the five voices are all slightly different and together create a section effect:</p>\r\n<ul>\r\n<li>None</li>\r\n<li>eP: Edith Piaf</li>\r\n<li>jG: Juliette Greco</li>\r\n<li>fL: Fran&ccedil;oise Leroux</li>\r\n<li>sD: Sasha Distel</li>\r\n</ul>\r\n<p>Finally, I mix it all in Spat, and here is the result:</p>\r\n<p>&nbsp;</p>\r\n<p><iframe width=\"1922\" height=\"1024\" src=\"https://www.youtube.com/embed/qHG_u3WMkpI\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p>There you go. In conclusion, I would say that what took the most time was the X-SAMPA transcription, but it was well worth the effort.&nbsp;<a href=\"/projects/detail/max-isis/\">MaxISiS</a> makes you want to write for synthesized voices!</p>",
        "topics": [
            {
                "id": 382,
                "name": "Choir",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 26,
                "name": "Isis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 381,
                "name": "Maxisis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 23,
                "name": "Singing synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 614,
                "name": "Traitement vocal",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "utiliser-maxisis-pour-composer-un-choeur",
        "pk": 610,
        "published": true,
        "publish_date": "2020-03-31T14:45:07+02:00"
    },
    {
        "title": "Le système intégral des bourdons harmoniques (une nouvelle approche pour comprendre l'intonation de la musique persane) - Vahid Hosseini",
        "description": "Dans cette conférence, le conférencier, qui connaît bien les traditions classiques persanes et européennes, explore le monde complexe de la musique radif persane. En utilisant la technologie pour calculer précisément les intervalles, il interprète les systèmes d'intervalles complexes qui façonnent le concept poly-modal de DASTGAH. En mettant l'accent sur les principaux instruments classiques persans, en particulier ceux à quatre cordes accordés par quartes ou quintes parfaites, il révèle leur rôle dans l'élaboration d'intervalles microtonaux pour une tension musicale dramatique. Remettant en cause les théories établies, il propose une nouvelle perspective ancrée dans les séries harmoniques du système intégral des bourdons harmoniques. Il redéfinit non seulement le système d'intonation Radif, mais suggère également des implications pour le Maqamat dans la musique arabe et turque, inspirant potentiellement de nouveaux modes de création musicale.",
        "content": "<p><a href=\"/media/uploads/bandeaux_articles.png\"></a></p>\r\n<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par : Vahid&nbsp;Hosseini&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/ehva84/\">Biographie&nbsp;</a></p>\r\n<p>L'utilisation de la technologie actuelle pour le calcul pr&eacute;cis des intervalles musicaux, ainsi que le potentiel de traitement de ces donn&eacute;es gr&acirc;ce &agrave; des outils tels qu'OpenMusic, ont ouvert de nouvelles voies analytiques pour l'interpr&eacute;tation des ph&eacute;nom&egrave;nes culturels musicaux. L'auteur, qui conna&icirc;t bien les traditions classiques persanes et europ&eacute;ennes, adopte une approche compositionnelle pour interpr&eacute;ter les syst&egrave;mes d'intervalles complexes formant le concept poly-modal de DASTGAH dans la musique persane Radif. La recherche prend en compte le r&ocirc;le int&eacute;gral jou&eacute; par les principaux instruments classiques persans et leurs syst&egrave;mes d'accordage dans l'&eacute;laboration du Radif en tant que syst&egrave;me de composition modulaire. Les instruments typiquement &agrave; quatre cordes, principalement accord&eacute;s par quartes ou quintes parfaites, &eacute;tablissent un syst&egrave;me d'accordage crucial pour la cr&eacute;ation d'intervalles microtonaux complexes, contribuant &agrave; la tension dramatique au sein de la construction musicale. En utilisant ces intervalles, l'improvisateur navigue dans les diverses possibilit&eacute;s offertes par les bourdons, cr&eacute;ant un r&eacute;cit dynamique par la modulation, l'accumulation de tension, l'apog&eacute;e, le rel&acirc;chement et les changements soudains de discours. L'auteur affirme qu'en d&eacute;pit des diff&eacute;rences de mesures d'intervalles entre les ma&icirc;tres historiques et contemporains de la musique persane (J. 
During 2006), la s&eacute;rie harmonique produite par l'ensemble de bourdons (limite-13) sert de source originale d'intervalles dans le syst&egrave;me d'intonation du Radif.&nbsp;Cela contredit les th&eacute;ories ant&eacute;rieures bas&eacute;es sur des divisions &eacute;gales, par exemple 24 edo (Vaziri 1934), (Touma 1996), et des rapports tels que 11:10 (Farhat 1990), qui sont incongrus avec l'accord r&eacute;el du bourdon. En s'appuyant sur les concepts fondamentaux de l'intonation juste et des intervalles purs, la recherche propose que l'intervalle neutre, &eacute;galement connu sous le nom de koron, puisse &ecirc;tre attribu&eacute; &agrave; l'intervalle 13:12, presque pr&eacute;cis&eacute;ment situ&eacute; entre les &eacute;carts de tierce mineure et majeure par rapport &agrave; 12-edo (approximativement +16 et +86). Bien que la d&eacute;termination pr&eacute;cise de ces intervalles ne corresponde pas enti&egrave;rement &agrave; la croyance de l'auteur dans la complexit&eacute; inh&eacute;rente de l'intonation, le syst&egrave;me int&eacute;gral des bourdons harmoniques offre une nouvelle perspective sur la construction musicale de la musique persane et peut-&ecirc;tre sur celle du Maqamat dans la musique arabe et turque. Cela peut conduire &agrave; une meilleure compr&eacute;hension de ce ph&eacute;nom&egrave;ne et potentiellement inspirer de nouveaux modes de cr&eacute;ation musicale, d&eacute;montr&eacute;s par les propres compositions de l'auteur.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1736,
                "name": "intonation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 358,
                "name": "Microtonal-music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1737,
                "name": "persian music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1738,
                "name": "radif",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 26694,
            "forum_user": {
                "id": 26667,
                "user": 26694,
                "first_name": "Vahid",
                "last_name": "Hosseini",
                "avatar": "https://forum.ircam.fr/media/avatars/photo_vahid.jpg",
                "avatar_url": "/media/cache/fd/44/fd44f4bcb37ed8131c48d0bfbe96b2ab.jpg",
                "biography": "Vahid Hosseini (1984 Tehran) is a composer and performer. He has studied composition with Salvatore Sciarrino, Marco Stroppa, Gabriele Manca, Paolo Aralla, Tristan Murail (masterclass) Alessandro Solbiati, and Veli-Matti Puumala, at Bologna conservatory - graduating with top marks cum laude - Sibelius Academy Helsinki, Chigiana Academy Siena, Verdi conservatory Milan, and HMDK Stuttgart. Earlier he studied the setar and Radif of Persian music with Massoud Shaari and Hossein Alizadeh. \n\nHis compositions have been praised as “outsiders to the dilemma of the unavoidable mimetic nostalgias of the present time”*, stemming from a “sense of clarity that proposes new solutions on how to survive a ground zero.” His music has been played by notable ensembles and performers like Mdi ensemble Milan, Fontanamix Bologna, Zagros ensemble Helsinki, Nicola Baroni, Paolo Ravaglia, David Nunez etc. \nHe has been awarded Premio Magone and Alberghini (Bologna), and third composition prize in Premio di Conservatorio Verdi, Milan.",
                "date_modified": "2025-12-10T15:35:50.484521+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ehva84",
            "first_name": "Vahid",
            "last_name": "Hosseini",
            "bookmarks": []
        },
        "slug": "the-integral-system-of-harmonic-drones-a-new-approach-to-understanding-persian-music-intonation",
        "pk": 2718,
        "published": true,
        "publish_date": "2024-02-13T09:52:25+01:00"
    },
    {
        "title": "Sound Delta — A Compositional Approach to 6DoF Audio Environments by Zak Cammoun",
        "description": "This abstract outlines a presentation for the IRCAM FORUM 2026 regarding the evolution of Sound Delta, a 6DoF ecosystem where the listener’s body serves as the playback head.",
        "content": "<p><span><strong><strong>➡️ This presentation is part of&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></span></p>\r\n<p><span>What happens when the \"sweet spot\" of spatial audio isn't a fixed seat in a studio, but a 500-square-meter physical landscape?</span></p>\r\n<p><span>This presentation shares the journey of </span><strong>Sound Delta</strong><span>, a 6DoF (Six Degrees of Freedom) spatial audio system designed to bridge the gap between architectural space and musical composition. By moving the processing from the studio to a mobile kit, </span><strong>Sound Delta</strong><span> allows for the emergence of the </span><strong>\"Promeneur &Eacute;coutant\"</strong><span>&mdash;a listener whose quiet walk through a space becomes an active, intimate exploration of a sonic environment.</span></p>\r\n<p><span>We will break down an ecosystem built in </span><strong>Max/MSP</strong><span> that brings the composer&rsquo;s gesture directly into the field. This is a fragile equilibrium: using the </span><strong>Sound Delta Mobile Interaction Unit</strong><span>, the act of composing becomes a physical one, where sounds are organized in layers and \"sculpted\" into the room&rsquo;s air. It is a process of defining boundaries&mdash;halos, cuts, and fades&mdash;that only exist when someone is there to hear them.</span></p>\r\n<p><span>Beyond the technical architecture, we will discuss the practical reality of protecting this experience. 
From the development of a </span><strong>\"Spatial Solf&egrave;ge\"</strong><span> to the creation of a simple diagnostic \"heartbeat\" for our fleet, we show how we managed to keep this system alive and stable for weeks of autonomous use in museums and festivals.</span></p>\r\n<p><strong>Sound Delta is a working environment where space and sound meet&mdash;a project that remains as delicate as it is functional.</strong></p>\r\n<p><strong><img alt=\"CARGO (c) Fran&ccedil;ois FLEURY\" src=\"https://forum.ircam.fr/media/uploads/user/f2ecd5ad451d7c488a23aa04206ffae8.jpg\" /></strong></p>",
        "topics": [],
        "user": {
            "pk": 28923,
            "forum_user": {
                "id": 28895,
                "user": 28923,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Photo_CAMMOUN_Zakariyya_3.jpg",
                "avatar_url": "/media/cache/98/73/98734ec37a19f08c95b0f000e167cebc.jpg",
                "biography": "Zakariyya Cammoun is a technical manager and sound engineer specializing in immersive and spatial audio systems. With a background in computer engineering and professional audio, he develops software-hardware solutions for installations, performances, and interactive sound environments.\n\nHis work places audience perception at the center of system design, with a focus on spatial listening and how sound shapes experience across contexts. He is involved in live spatial audio concerts and location-based projects exploring relationships between movement, environment, and listening.\n\nHe is the designer of the Sound Delta engine, a location-aware platform used as both a compositional tool and immersive playback device, enabling dynamic spatialized sound in public spaces.\n\nAs a technical manager, he leads system architecture, spatial audio design, and real-time integration with artists and production teams. His work spans galleries, festivals, concert venues, and urban settings, including collaborations with Niyaz, Kasper T. Toeplitz, and Collectif MU, alongside ongoing research into perceptual approaches to immersive sound.",
                "date_modified": "2026-03-14T02:59:25.393888+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "zak",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sound-delta-a-compositional-approach-to-6dof-audio-environments-by-zak-cammoun",
        "pk": 4509,
        "published": true,
        "publish_date": "2026-03-14T03:05:43+01:00"
    },
    {
        "title": "Drizzle Path ——Ouvrage audiovisuel interactif - Yixuan Zhao",
        "description": "Drizzle Path est un paysage mystérieux plein d'imagination, un voyage merveilleux contenant l'esprit d'exploration, et un renouveau sinueux de la mémoire. \r\n\r\nprésenté par Zhao Yixuan, Yang Ruochen",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p><br />Pr&eacute;sent&eacute;&nbsp;par :&nbsp;Zhao Yixuan, Yang Ruochen<br /><a href=\"https://forum.ircam.fr/profile/toro/\">Biographie</a></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9384cce3c4148786e5615d03a7902841.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"3708\" height=\"2079\" /></p>\r\n<p>J'explore la mani&egrave;re dont les interpr&egrave;tes classiques peuvent engager et influencer les sons &eacute;lectroniques et les images visuelles. Le centre de ce travail est l'interpr&egrave;te, dont l'expression musicale relie &eacute;troitement le son et la vision. Nous remercions tout particuli&egrave;rement l'artiste visuel Qiao Zhi pour avoir con&ccedil;u et arrang&eacute; les &eacute;l&eacute;ments visuels de cette &oelig;uvre.</p>\r\n<p>Le piano est une manette qui contr&ocirc;le &agrave; la fois le son et la vision. L'expression musicale du pianiste est une variable en temps r&eacute;el dans le syst&egrave;me audiovisuel. 
Qu'il s'agisse de remodeler de nouveaux timbres &agrave; l'aide d'algorithmes al&eacute;atoires, de construire un espace tridimensionnel ou d'utiliser une vid&eacute;o g&eacute;n&eacute;r&eacute;e et &eacute;volu&eacute;e par l'IA pour pr&eacute;senter la transformation en cascade des champs, la double interaction de l'audition et de la vision donne lieu &agrave; de multiples interpr&eacute;tations, explique le pouvoir expressif inh&eacute;rent de l'audiovisuel donn&eacute; par l'IA et nous incite &agrave; r&eacute;fl&eacute;chir &agrave; la symbiose entre l'homme et l'IA.</p>\r\n<p>L'&oelig;uvre d&eacute;tecte les param&egrave;tres de performance en temps r&eacute;el, y compris le pic d'attaque, les harmoniques, le tempo, l'intensit&eacute; sonore, etc., puis d&eacute;clenche des sons &eacute;lectroniques ou des effets sonores et utilise des algorithmes al&eacute;atoires pour synth&eacute;tiser de nouveaux sons. Les param&egrave;tres de performance contr&ocirc;lent &eacute;galement la partie visuelle. En plus du \"d&eacute;clenchement\", la deuxi&egrave;me partie de l'&oelig;uvre utilise le tempo de la performance pour affecter la transformation continue de l'image de l'IA. Elle utilise principalement le mod&egrave;le \"realistic_vision\" dans \"Stable Diffusion\" et calcule la s&eacute;quence d'images par le biais du composant fonctionnel \"Deform\". Le travail comprend une version st&eacute;r&eacute;o et une version Atmos.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 753,
                "name": "Artificial intelligence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1812,
                "name": "art performance",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 128,
                "name": "Audiovisual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1820,
                "name": "interactive live electronics",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1827,
                "name": "machine listening",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 62797,
            "forum_user": {
                "id": 62730,
                "user": 62797,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/0346.jpeg",
                "avatar_url": "/media/cache/81/c6/81c60f76699192d11694304bc383afc7.jpg",
                "biography": "Zhao Yixuan is a composer, member of Electroacoustic Music Association of China (EMAC), a post-doctoral researcher at Central Conservatory of Music, China, a visiting researcher at Royal Birmingham Conservatoire, UK.\n\nShe has been dedicated to exploring the practice of digital audio and artificial intelligence in music composition, and collaborating with performers to search more possibilities in technological performance environments. Her composition, which mainly focuses on electroacoustic music, interactive music and contemporary music, her works have won numerous prizes and performed in many international conferences and concerts, including Journées d'Informatique Musicale, NIME, China-UK International Music Festival, Nottingham New Music Festival, MUSICACOUSTICA-BEIJING, SOMI, Beijing Youth Arts Festival, WOCMAT Taiwan, etc.\n\ncontact:zyixuan1111@gmail.com",
                "date_modified": "2025-05-05T12:55:33.601680+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "toro",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "drizzle-path-interactive-audiovisual-work-1",
        "pk": 2760,
        "published": true,
        "publish_date": "2024-02-20T06:47:26+01:00"
    },
    {
        "title": "Toward better patch sustainability : techniques to ensure music software compatibility over time",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p><span>No need to say here and now that making music with a computer program is a common practice. Just as common is the experience of no longer being able to re-play a piece after a certain amount of time &mdash; and often not such a long time &hellip;</span></p>\r\n<p><span>During this talk, I will give advice and recommendations for computer music programmers who want to make their patches a little more resistant to the passing of time, to successive computer generations and software versions, and to programmed obsolescence. I will show some examples of preservation practices and prospective views on long-term computer music preservation projects.</span></p>",
        "topics": [],
        "user": {
            "pk": 76,
            "forum_user": {
                "id": 76,
                "user": 76,
                "first_name": "Serge",
                "last_name": "Lemouton",
                "avatar": "https://forum.ircam.fr/media/avatars/deborah.jpg",
                "avatar_url": "/media/cache/eb/52/eb52181309dccd2a20b1dc1b54ef0f67.jpg",
                "biography": "Serge Lemouton\n\nréalisateur en informatique musicale Ircam\n\nAprès des études de violon, de musicologie, d'écriture et de composition, Serge Lemouton se spécialise dans les différents domaines de l'informatique musicale au département Sonvs du Conservatoire national supérieur de musique de Lyon. Depuis 1992, il est réalisateur en informatique musicale à l'Ircam. Il collabore avec les chercheurs au développement d'outils informatiques et participe à la réalisation des projets musicaux de compositeurs parmi lesquels Florence Baschet, Laurent Cuniot, Michael Jarrell, Jacques Lenot, Jean-Luc Hervé, Michaël Levinas, Magnus Lindberg, Tristan Murail, Marco Stroppa, Fréderic Durieux et autres. Il a notamment assuré la réalisation et l’interprétation en temps réel de plusieurs œuvres de Philippe Manoury, dont K…, la frontière, On-Iron, Partita 1 et 2, et l’opéra Quartett de Luca Francesconi.\n\nActuellement, il s’intéresse plus particulièrement à la transmission et la préservation des œuvres du répertoire de l’informatique musicale.",
                "date_modified": "2026-02-27T09:18:37.644467+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 25,
                        "forum_user": 76,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [
                            {
                                "id": 276,
                                "membership": 25
                            },
                            {
                                "id": 563,
                                "membership": 25
                            },
                            {
                                "id": 751,
                                "membership": 25
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "lemouton",
            "first_name": "Serge",
            "last_name": "Lemouton",
            "bookmarks": []
        },
        "slug": "toward-better-patch-sustainability-techniques-to-ensure-music-software-compatibility-over-time",
        "pk": 1340,
        "published": true,
        "publish_date": "2022-09-13T16:48:26+02:00"
    },
    {
        "title": "Merzmania by Gintas Kraptavicius",
        "description": "Merzmania est une pièce qui relie la musique classique à la musique bruitiste faite de sons synthétiques. J'utilise un ordinateur, le logiciel Plogue Bidule et un contrôleur midi/clavier assigné aux paramètres des plugins VST. Tous les paramètres du logiciel sont contrôlés/modifiés en temps réel pendant la performance à l'aide des boutons et des curseurs du contrôleur midi.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img src=\"/media/uploads/user/download.png\" alt=\"\" width=\"943\" height=\"493\" /></p>\r\n<p>Presented by : Gintas Kraptavicius</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/gintas/\" target=\"_blank\">Biography</a></p>\r\n<p>Merzmania it is piece connecting classical music with noise music made from synthesized sounds. I am using a computer, Plogue Bidule software &amp; midi controller/keyboard assigned to VST plugins parameters. All software parameters controlled/altered live in a real time during performance using knobs &amp; sliders of midi controller.</p>\r\n<p><span>Supported by Lithuanian Culture Institute.</span></p>\r\n<p></p>",
        "topics": [
            {
                "id": 2508,
                "name": "Electroacoustic breakcore",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1805,
                "name": "electroacoustic music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1808,
                "name": "granular synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1862,
                "name": "noise",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 9009,
            "forum_user": {
                "id": 9006,
                "user": 9009,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/8b0d897cc5d56c0935c5e0d8c88d570e?s=120&d=retro",
                "biography": "Gintas K (real name Gintas Kraptavicius) is a sound artist born in Lithuania.\nGintas K exploring experimental, electroacoustic, electronic, computer music, granular synthesis, live electronic music aesthetics. Since 2011 member of Lithuanian Composers Union. Till now he has released 51 albums, took part in various international festivals, conferences, symposiums as transmediale.05 : Basics, transmediale.07 : Unfinish!, ISEA 2015: Disruption, IRCAM Forum Workshops 2017, ICMC2018, ICMC2022, ICMC-NYCEMF 2019 , NYCEMF 2020 , Ars Electronica Festival 2020 , Ars Electronica Festival 2023 , NYCEMF 2021 , NYCEMF 2022 , NYCEMF 2023 NYCEMF 2024 , Ars Electronica Festival 2024 . Winner of the II International Sound-Art Contest Broadcasting Art 2010, Spain. Winner of the II International Sound-Art Contest Broadcasting Art 2010, Spain. Winner of The University of South Florida New-Music Consortium 2019 International Call for Scores in electronic composition category.",
                "date_modified": "2025-01-31T17:20:12.177410+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "gintas",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "merzmania",
        "pk": 3215,
        "published": true,
        "publish_date": "2025-01-15T14:30:56+01:00"
    },
    {
        "title": "Coherent Multimodal Instrument Design – Case of Granular Synthesizer by Myungin Lee",
        "description": "This talk discusses the properties of coherent multimodal instruments based on numerical crossmodal observation. Especially by introducing the design process of a multimodal granular synthesizer, this presentation aims to contribute to reorganizing the design process of multimodal instruments beyond the old and recent customs.",
        "content": "<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/94a8d7f1b60057b7a276c936cf107c9c.jpeg\" /></p>\r\n<p>Digital medium provides great freedom for new artistic expressions with advanced audio, graphics, interface, and algorithms, including machine learning. However, while our nature is multimodal, these modalities in the digital domain are genuinely separate, and the computational platform allows innumerable varieties of linkages among them. For this reason, the holistic multimodal experience is highly dependent on the design and connection of different modalities.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Crossmodal Correspondence &amp; Coherence</strong></p>\r\n<p>In our multimodal experience, the signals sensed through one sensory modality can influence the processing of information received through another. Such a phenomenon is crossmodal interaction. To achieve a \"natural\" mapping of features, or dimensions, of experience across sensory modalities, we genuinely utilize crossmodal interaction to find crossmodal correspondence among different modalities. This natural mapping requires systematic, consistent, logical connections bridging \"coherent\" multimodal experience. Here is an example of crossmodal correspondence, takete&ndash;maluma effect.</p>\r\n<p>In this experiment by Wolfgang K&ouml;hler in 1929, the participants match the words \"takete\" and \"maluma\" with the two figures above. About 97 % of participants relate the word \"takete\" better with the left shape, whereas the word \"maluma\" connects with the right shape.&nbsp;</p>\r\n<p>Let us extract a few features representing the characteristics of two figures and words to expand this experiment into numerical analysis. 
The figure below shows the features extracted: geometric observation, waveform from text-to-speech, audio spectrogram, and pitch estimation.&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a20995e2cdbcfbecd0843e2a62fc557f.jpg\" /></p>\r\n<p>&nbsp;</p>\r\n<p><strong>Interface</strong></p>\r\n<p>The instrument's interface is the membrane of interaction between humans and technology. Designing an interface for a digital instrument, in particular, requires substantial effort to achieve coherent crossmodal interaction. While the characteristics of an acoustic instrument are determined by its physicality, including structure, resonance, texture, and space, digital instruments on computational platforms are inherently non-physical. The digital instrument designer can separate sound production from its physical means. This circumstance gives great freedom to instrument design. At the same time, designing an interface to interact with sound material that is now separated from physicality is challenging. Nevertheless, our bodies and movements are the most expressive tools that humans have.</p>\r\n<p>&nbsp;</p>\r\n<p>To assess the characteristics of coherent instruments, this study proposes a model that interprets the experience of the music, instrumentalist, instrument, and audience as a function.&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/643fa688ffaa9a76618e0f12f09a1dd2.jpg\" /></p>\r\n<p>This model derives two properties of coherent instruments: time invariance and a perceptible interface. Time invariance means the input and output characteristics of a system do not change with time. The second property states that the audience can identify and relate the gestural event to the sound. 
These properties are necessary for our brain to synthesize information from systematic, consistent, and logical cross-modal stimuli through multisensory integration.</p>\r\n<p>These observations raise underlying questions:<br /><br /><em>&nbsp; How do we decide which multimodal experience is coherent?</em></p>\r\n<p><em>&nbsp; What is the &ldquo;natural&rdquo; mapping of multimodal features?</em></p>\r\n<p><em>&nbsp; Is the user study the only method to evaluate the experience?&nbsp;</em></p>\r\n<p><em>&nbsp; Is there a way to numerically analyze the level of multimodal coherence?</em></p>\r\n<p><em>&nbsp; What opportunities does the multimodal experience allow compared with the monomodal experience?</em></p>\r\n<p>&nbsp;</p>\r\n<p>This study discusses the key elements of multimodal instrument design and their potential benefits through case studies. The design process of the multimodal granular synthesizer AlloThresher is introduced.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>- AlloThresher: Multimodal Granular Synthesizer</p>\r\n<p><iframe src=\"https://www.youtube.com/embed/nSmKgdPC1ek?feature=shared\" width=\"560\" height=\"314\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p>AlloThresher is a multimodal instrument based on granular synthesis with a gestural interface. Granular synthesis is a sound synthesis method that creates complex tones by combining and mixing simple micro-sonic elements called grains. Using two smartphones with gyroscopes and accelerometers, one in each hand, the user can precisely and spontaneously trigger the parameters of the granular synthesis in real time. The devised gestural interface includes an adaptive filter and reverberation, adding expressiveness. The modulated spectrogram of each grain and post-processing generate the corresponding visuals, morphing and blending dynamically with the instrumentalist's performance. The entire software is programmed in C++, optimizing the real-time multimodality. 
By removing conventional interfaces like knobs and sliders, this seamless connection between modalities exploits the profound advantage of the gestural interface. The instrumentalist's physical presence and gesture become part of the space and the performance, so that the audience can simultaneously observe and cohesively connect the audio, visuals, and interface.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>Additionally, this study suggests a way to use correlation coefficients between modalities to assess crossmodal correspondence.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Related Articles</strong></p>\r\n<p>- Myungin Lee, &ldquo;Coherent Digital Multimodal Instrument Design and the Evaluation of Crossmodal Correspondence,&rdquo; Ph.D. dissertation, August 2023. <a href=\"https://escholarship.org/uc/item/3gb4h770\">https://escholarship.org/uc/item/3gb4h770</a></p>\r\n<p>- Myungin Lee, Jongwoo Yim, \"AlloThresher: Multimodal Granular Synthesizer,\" International Computer Music Conference (ICMC), June 2024. <a href=\"https://www.researchgate.net/publication/382330556_AlloThresher_Multimodal_Granular_Synthesizer\">https://www.researchgate.net/publication/382330556_AlloThresher_Multimodal_Granular_Synthesizer</a></p>",
        "topics": [
            {
                "id": 2347,
                "name": "crossmodal",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1808,
                "name": "granular synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2346,
                "name": "instrument design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 206,
                "name": "Interactive real-time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1358,
                "name": "multimodal",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 86584,
            "forum_user": {
                "id": 86481,
                "user": 86584,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/678a1ba6ba569b215b8c6e157d6be926?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-11-07T02:00:03.117690+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 986,
                        "forum_user": 86481,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "mlee",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3074,
                    "user": 86584,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "coherent-multimodal-instrument-design-case-of-granular-synthesizer",
        "pk": 3074,
        "published": true,
        "publish_date": "2024-10-25T02:26:34+02:00"
    },
    {
        "title": "Le Temps Des Cloches par nadir Babouri",
        "description": "« Le Temps des Cloches » est une installation sonore qui met en valeur les mélodies cristallines du carillon de l'église Saint-Brice de la ville de Tournai, en Belgique. L'installation utilise un système embarqué. Le patch Max utilisé pour déclencher la projection sonore est compilé à l'aide de la bibliothèque RNBO, puis exporté directement vers un Raspberry Pi. La projection des mélodies est programmée et synchronisée avec les heures exactes de la journée. \r\n« Le Temps Des Cloches est un hommage à Jean Lochard.",
        "content": "<p><span></span></p>\r\n<p>Le Temps Des Cloches est une installation sonore dont la mise en &oelig;uvre consiste &agrave; projeter des airs enregistr&eacute;s du &laquo; carillon &raquo; de l'&eacute;glise Saint-Brice dans la ville de Tournai, en Belgique. La projection sonore des airs est programm&eacute;e et synchronis&eacute;e avec les heures exactes de la journ&eacute;e, de sorte que le site r&eacute;sonne en m&ecirc;me temps que son environnement. Vingt-cinq m&eacute;lodies programm&eacute;es ont &eacute;t&eacute; enregistr&eacute;es de huit heures du matin &agrave; huit heures du soir. Une musique toutes les demi-heures. L'installation utilise une solution de syst&egrave;me embarqu&eacute;. Le patch Max de l'installation sonore est compil&eacute; &agrave; l'aide de la biblioth&egrave;que RNBO, puis export&eacute; directement vers le Raspberry Pi.</p>\r\n<p>Le Temps Des Cloches a re&ccedil;u une subvention de la F&eacute;d&eacute;ration Wallonie Bruxelles dans le cadre du Parcours d'Enseignement Culturel et Artistique 2023-2025.</p>\r\n<p><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/045a8074f543976beeaa6fd3e48d3da3.jpg\" width=\"972\" height=\"590\" /></span></p>\r\n<p><span>Credits :</span><span>\u2028<br /></span><span>Conception, programmation &amp; installation : nadir B.</span><span>\u2028 </span></p>\r\n<p><span>Field recording : nadir B., Thierry Ottevaere<br /></span></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 2485,
                "name": "carillon",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2486,
                "name": "chimes",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2488,
                "name": "embedded system",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2487,
                "name": "heritage",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2350,
                "name": "Raspberry Pi",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2349,
                "name": "RNBO",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 26,
            "forum_user": {
                "id": 26,
                "user": 26,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Acousmatic_Miniature_1.jpg",
                "avatar_url": "/media/cache/e4/a3/e4a33a726757791da7c0210ad665a60f.jpg",
                "biography": "Membre actif du Forum Ircam et utilisateur des logiciels de l’institut où il a été formé par Alexis Baskind (Spatialisateur), Jean\nLochard (Audiosculpt), Mikhail Malt (Open Music) , Benjamin Thigpen (Max), Nicolas Misdariis (Sound Design).\nNadir Babouri is an active member of IRCAM Forum and a user of IRCAM's softwares. He studied with Alexis Baskind (Spatialisateur), Jean Lochard (Audiosculpt), Mikhail Malt (Open Music), Benjamin Thigpen (MaxMsp), Nicolas Misdariis (Sound Design) and Jean-Louis Giavitto (Antescofo Language)",
                "date_modified": "2025-04-15T10:27:19.119235+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nadir-b",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "le-temps-des-cloches-by-nadir-b",
        "pk": 3183,
        "published": true,
        "publish_date": "2024-12-25T15:45:55+01:00"
    },
    {
        "title": "Somax 2.6 et les outils de co-création REACH - Marco Fiorini, Mikhail Malt",
        "description": "Présentation de la dernière version de Somax 2.6 et des outils de co-création du projet REACH en cours de développement, lors des Ateliers du Forum IRCAM 2024 à Paris, par Marco Fiorini et Mikhail Malt.",
        "content": "<div style=\"text-align: center;\"><a href=\"https://forum.ircam.fr/agenda/save-the-date-ateliers-du-forum-2024-edition-des-30-ans/detail/\"><img src=\"/media/uploads/bandeaux_articles.png\" height=\"330\" width=\"990\" /></a></div>\r\n<div style=\"text-align: center;\"></div>\r\n<div style=\"text-align: center;\"></div>\r\n<div style=\"text-align: center;\"></div>\r\n<div>Pr&eacute;sent&eacute; par :&nbsp;Marco Fiorini et Mikhail Malt</div>\r\n<div><a href=\"https://forum.ircam.fr/profile/fiorini/\">Biographie</a> &nbsp;<br />-</div>\r\n<div>Somax 2.6 est une application et une biblioth&egrave;que pour l'interaction co-cr&eacute;ative en direct avec des musiciens dans des sc&eacute;narios d'improvisation, de composition ou d'installation. <br />Il est bas&eacute; sur une machine d'&eacute;coute, un moteur r&eacute;actif et un mod&egrave;le g&eacute;n&eacute;ratif qui fournissent une improvisation stylistiquement coh&eacute;rente tout en s'adaptant continuellement au contexte musical externe audio ou midi. Il utilise un mod&egrave;le de m&eacute;moire cognitive bas&eacute; sur des corpus musicaux qu'il analyse et apprend comme bases stylistiques, en utilisant un processus similaire &agrave; la synth&egrave;se concat&eacute;native pour rendre le r&eacute;sultat, et il s'appuie sur un espace de repr&eacute;sentation des connaissances harmoniques et texturales appris globalement en utilisant des techniques d'apprentissage automatique.<br /> -<br />Somax2 est un descendant du c&eacute;l&egrave;bre Omax d&eacute;velopp&eacute; au fil des ans par l'&eacute;quipe de repr&eacute;sentation musicale et offre d&eacute;sormais un environnement puissant et fiable pour la co-improvisation, la composition, les installations, etc. 
Ecrit en Max et Python, il dispose d'une impl&eacute;mentation modulaire multithread, de multiples joueurs interagissant sans fil (agents IA), d'une nouvelle interface utilisateur avec des tutoriels et de la documentation, ainsi que d'un certain nombre de nouvelles saveurs et de nouveaux param&egrave;tres d'interaction.</div>\r\n<div>-</div>\r\n<div>Dans la nouvelle version 2.6, il est &eacute;galement con&ccedil;u comme une biblioth&egrave;que Max, permettant &agrave; l'utilisateur de programmer des patchs Somax2 personnalis&eacute;s, permettant &agrave; chacun de concevoir son propre environnement et son propre traitement, impliquant autant de sources, d'acteurs, d'influenceurs et de moteurs de rendu que n&eacute;cessaire. Avec ces abstractions, mises en &oelig;uvre pour fournir une programmation et un flux de travail complets de type Max, l'utilisateur peut obtenir les m&ecirc;mes r&eacute;sultats que l'application Somax2 mais, gr&acirc;ce &agrave; leur architecture modulaire, il est &eacute;galement possible de construire des patchs personnalis&eacute;s et de d&eacute;bloquer des comportements d'interaction et de contr&ocirc;le in&eacute;dits. 
<br />Cette nouvelle version ajoute &eacute;galement de nouvelles fonctionnalit&eacute;s, comme l'enregistrement de corpus en temps r&eacute;el, la gestion de plusieurs r&eacute;gions et l'optimisation de la phase de battement.<br />Somax 2.6 fonctionne pour la premi&egrave;re fois en mode natif sur les processeurs ARM de Mac OS, et une version Windows est en cours de d&eacute;veloppement.<br /> -<br />Somax2 est d&eacute;velopp&eacute; par l'&eacute;quipe Music Representation de l'IRCAM et fait partie du projet ANR MERCI (Mixed Musical Reality with Creative Instruments) et du projet ERC REACH (Raising Co-creativity in Cyber-Human Musicianship).</div>\r\n<div>-</div>\r\n<div><span>Plus d'infos sur&nbsp;<a href=\"http://repmus.ircam.fr/somax2\">repmus.ircam.fr/somax2</a></span></div>\r\n<div>&nbsp;-</div>\r\n<div><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f827bc96a599627385779240af517e07.png\" /></span></div>\r\n<div></div>\r\n<div></div>\r\n<div></div>\r\n<div><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></div>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 748,
                "name": "co-creativity",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 159,
                "name": "Community",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1651,
                "name": "Improvisation, générativité et interactions co-créatives",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 841,
                "name": "machine-learning, improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1287,
                "name": "REACH",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 545,
                "name": "Repmus team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32267,
            "forum_user": {
                "id": 32219,
                "user": 32267,
                "first_name": "Marco",
                "last_name": "Fiorini",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-01-16_at_10.39.51.jpeg",
                "avatar_url": "/media/cache/e7/ed/e7ed5b0d44a066e65e188a351b8c9bb8.jpg",
                "biography": "Marco Fiorini is an Italian musician and researcher specializing in human-machine interaction in musical improvisation. \nHe is part of the Music Representation team at IRCAM in Paris, working on the ERC REACH project with a focus on Somax2. \nAs a PhD candidate at Sorbonne Université he develops co-creative instruments that foster real-time interaction between musicians and artificial agents.\nHe has collaborated with artists such as Jöelle Léandre, George Lewis, Steve Lehman, and Horse Lords. His work as guitarist, electronic musican and computer music designer have been featured at major international venues and festivals including Carnegie Hall (New York), ManiFeste (Centre Georges Pompidou, Paris), Improtech Paris-Tokyo (Tokyo University of the Arts), Klang (Royal Danish Academy of Music, Copenhagen), Mixtur (ESMUC, Barcelona).\nIn 2024, he is an invited lecturer at the Max Summer School at Tokyo Geidai University of the Arts, and in 2025 he will lead a Somax2 workshop at Berklee College of Music for the 50th anniversary of the International Computer Music Conference in Boston.\nHe holds degrees in Jazz Guitar, Electronic Music, Sound and Music Computing and Software Engineering.",
                "date_modified": "2026-02-25T18:50:33.457396+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 407,
                        "forum_user": 32219,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "fiorini",
            "first_name": "Marco",
            "last_name": "Fiorini",
            "bookmarks": []
        },
        "slug": "somax-26-presentation-ircam-forum-workshops-2024-marco-fiorini-mikhail-malt",
        "pk": 2707,
        "published": true,
        "publish_date": "2024-02-01T10:18:12+01:00"
    },
    {
        "title": "Scenes from the Plastisphere - Rama Gottfried",
        "description": "Presented during the IRCAM Forum Workshop 2023 In Paris.",
        "content": "<p>One of the key realisations I had during my musical research residency at&nbsp;Ircam in 2012, was the importance of multi-modal information in the process&nbsp;of spatial perception; both in terms of visual presence as well as the inherent&nbsp;forms and textures perceivable in a purely psychoacoustic timbre space.<br />In the piece&nbsp;Fluoresce&nbsp;developed during this time, a single cellist performs in&nbsp;a surrounding virtual space of speakers in HOA and WFS. The in the midst of&nbsp;swarms of invisible spatial textures, the presence of the performer created a&nbsp;powerful visual focal point in space which drew the audience&rsquo;s attention. I&nbsp;found that focusing on the performer as a concrete visual form, also creates a&nbsp;sense of solidification in sonic space, which warps and telescopes our&nbsp;perception of the spatial auditory scene.</p>\r\n<p><br />At the same time, while developing approaches for working with complex movements of groups&nbsp;of points to create interesting spatial textures and flows, I was confronted with&nbsp;the cognitive relationship between auditory and visual spatial perception:&nbsp;when seeing a texture of points has a strong influence on our ability to hear&nbsp;the details of the scene.&nbsp;And further, the sound itself contains important psychoacoustic cues of form in auditory space, height, depth,&nbsp;density, created purely from formations of timbre and contrapuntal texture.</p>\r\n<p><br />In the years following this time, I continued reflecting on these principles of&nbsp;spatial composition, and developed a series of scenographic/music-theater/live cinema works which explore the connection&nbsp;between&nbsp;visual and auditory spatial forms.&nbsp;Drawing on insights from object theater, field recording, and Foley sound practices, the&nbsp;pieces are constructed as a series of tableaus in different types of spaces and&nbsp;environments.&nbsp;In the work&nbsp;Scenes from the 
Plastisphere, a dramaturgy&nbsp;of spatial scales leads the audience from frontal, cinematic&nbsp;compressed spaces and microscopic scale images, developing towards a transformation of&nbsp;the stage scenography &mdash; where the screen becomes a floating kind of cloud puppet, three-dimensional projection form, and at the final climax a disco ball bursts open our&nbsp;experience of the space, coupled with a shift of auditory perspective into a large, completely-surrounding reverberant space.</p>",
        "topics": [],
        "user": {
            "pk": 1799,
            "forum_user": {
                "id": 1797,
                "user": 1799,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/5543ddd7f5f903e8143316215674631b?s=120&d=retro",
                "biography": "Rama Gottfried's recent works aim to increase our sensitivity to the web of relations that connect humans and the other animate and inanimate entities that surround us. His pieces are conceived as scenographic worlds — bodies with voices that move and interact in physical and immaterial environments, constructed from the medias of acoustic and electronic instrumental performance, puppet-, object-, material-theater, live-cinema, and the site-specific performance context. Brought to life through the collaborative actions of human and nonhuman performers, the works attempt to absorb the audience and physical space, subtly expanding our awareness of detail.\r\n\r\nBorn in New York in 1977, Rama grew up in Burlington, Vermont, where he began instrumental and electronic music training at an early age, and pursued visual art studies before shifting focus to music performance and composition. After moving to New York City in 2001, he joined Ensemble Pamplemousse, collaborating with the group from 2003-2013 on developing approaches to the merging of sound, installation, and performance arts.",
                "date_modified": "2025-12-02T11:47:32.213617+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "rama",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "scenes-from-the-plastisphere",
        "pk": 2064,
        "published": true,
        "publish_date": "2023-02-15T16:39:32+01:00"
    },
    {
        "title": "Tweak de la semaine (W27)",
        "description": "Des couches successives de modulation avec rétroaction créent un vortex cosmique.",
        "content": "<div style=\"position: relative; padding-bottom: 65%; height: 0px; border-radius: 10px; overflow: hidden;\"><iframe width=\"300\" height=\"150\" style=\"border: none; position: absolute; top: 0px; left: 0px; width: 100%; height: 100%;\" src=\"https://tweakable.org/embed/examples/hypereikon_v1\" frameborder=\"0\"></iframe></div>\r\n<h4 id=\"create-your-own-tweakables-at-tweakable-org\" style=\"position: relative; padding-bottom: 65%; height: 0px; border-radius: 10px; overflow: hidden;\">Cr&eacute;ez votre propre Tweak sur&nbsp;<a href=\"http://tweakable.org/\">tweakable.org</a>.</h4>",
        "topics": [
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 426,
                "name": "Tweakable",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 427,
                "name": "Tweakoftheweek",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 127,
                "name": "Video",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18424,
            "forum_user": {
                "id": 18417,
                "user": 18424,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d36f7c122c36bf714b376ed2c132c929?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jwvsys",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "tweak-of-the-week-w27",
        "pk": 716,
        "published": true,
        "publish_date": "2020-07-09T10:04:51+02:00"
    },
    {
        "title": "WebRTC and the Web Audio API as a Means for a Real Time Collaborative Performance Environment",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>WebRTC's stable release in 2018 gave a significant improvement to internet communications and has since been adopted by a multitude of consumer applications. The framework allows for low latency audio and video streaming directly in the web browser and coupled with the Web Audio API can give a fully formed musical performance environment without the need for any native software beyond the browser itself. The following paper will illustrate the construction of a practical performance environment of a digital synthesizer and audio processing units built with the Web Audio API, coupled with WebRTC granting capabilities of streaming high quality audio data between users for compelling remote collaborations. With this exchange, two users can be patched into a central location without the requirement for either being present in the performance space, with all included audio software is housed directly in the web browser. Performers may also tool the environment to process and manipulate each other's audio streams, exchange visual data, and construct custom web elements, for an even greater expansiveness of live performance.</p>",
        "topics": [
            {
                "id": 752,
                "name": "javascript",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 158,
                "name": "Network",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 395,
                "name": "Web",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 896,
                "name": "webrtc",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 29323,
            "forum_user": {
                "id": 29295,
                "user": 29323,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/05d3912ee7bea1d91b3efdc1414e0aa5?s=120&d=retro",
                "biography": "Daniel McKemie is a composer, researcher, and percussionist based in New York City. He focuses on utilizing the internet and browser technology to realize a more accessible platform for multimedia art. His current work includes realizing historical instruments, musical tools, and audio processing units in the browser; and finding new ways of remote collaboration through WebRTC, WebSockets, and shared networks. His music has been performed in Europe, Asia, South America, and Australia; and his research on computer music and web-based audio/composition techniques have been presented or published internationally in conferences as part of the Korean Electro-acoustic Music Society, the Australasian Computer Music Association, the International Symposium on Computer Music Multidisciplinary Research, the Society of Electro-acoustic Music in the United States (SEAMUS), among others.",
                "date_modified": "2022-09-22T03:01:13+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "dmckemie77",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "webrtc-and-the-web-audio-api-as-a-means-for-a-real-time-collaborative-performance-environment",
        "pk": 1305,
        "published": true,
        "publish_date": "2022-09-15T10:24:35+02:00"
    },
    {
        "title": "Emerald Ash: tracing layers in the disintegration; a presentation on eliciting participation with videoscores by Terri Hron",
        "description": "This presentation uses the performance/installation Emerald Ash as a focus to discuss a number of recurring threads in my work and how they relate to technologies. Some of these include: performer-specific composition, videoscoring and iterative/multiformat works.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img alt=\"Emerald Ash Installation\" src=\"https://forum.ircam.fr/media/uploads/user/f1411b3bdab02312340a7bf0e79464c3.png\" width=\"1489\" height=\"919\" /></p>\r\n<p>Presented by Terri Hron</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/terrihron/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p>The performance/installation<span>&nbsp;</span><a href=\"https://vimeo.com/696165203\"><em>Emerald Ash</em></a><span>&nbsp;</span>is a meditation on the fate of the ash trees in Eastern North America, decimated by the Emerald Ash Borer, an invasive species of jewel beetle brought from Northeastern Asia in the 1980s. In Montreal, where I live, some 40,000 ash trees are being cut down to reduce the spread of the insect, and it is estimated that most ash trees will not survive. A few trees, especially in the cities&ndash;in Montreal around 8000&ndash;&ndash;are being vaccinated with the use of pesticide.</p>\r\n<p>It is an iterative work where each new version composts and augments the materials left behind by its former lives. This recycling happens both with the use of simple and complex technologies, some music-specific, others visual, which are intertwined and sometimes haphazard. Nevertheless, they are in a continuation of my work with videoscores since 2010, and in this presentation, I will highlight that evolution of notions of tracing, contingency, shared creative ownership and the limits of collaboration.</p>\r\n<p></p>",
        "topics": [
            {
                "id": 1805,
                "name": "electroacoustic music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1707,
                "name": "installation sonore",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2665,
                "name": "videoscore",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 102622,
            "forum_user": {
                "id": 102493,
                "user": 102622,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/TH_Fall2024.JPG",
                "avatar_url": "/media/cache/c1/e4/c1e4985bdd142f8fa8bf92fccf79cc12.jpg",
                "biography": "Terri Hron is a musician, a performer and a multimedia artist whose work explores relationships and belonging with places, people and scores. Using historical performance practice, field recording, invented ceramic instruments and videoscores, she often works in close collaboration with others and produces performances, gatherings and events. From 2017-2024, she was Executive Director of the Canadian New Music Network, where she developed programs focusing on pluralism and sustainability. She is now the director of the francophone magazine, Circuit, musiques contemporaines. Recent collaborators include Monty Adkins, Charlotte Hug, Paula Matthusen, Helen Pridmore and Jennifer Beattie, and commissions include suddenlyListen, Ensemble Paramirabó, GreyWing Ensemble, Dead of Night, Splinter Reeds and Ensemble Supermusique.",
                "date_modified": "2025-02-18T22:02:24.637918+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "terrihron",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "emerald-ash-tracing-layers-in-the-disintegration-a-presentation-on-eliciting-participation-with-videoscores-by-terri-hron",
        "pk": 3296,
        "published": true,
        "publish_date": "2025-02-18T21:50:46+01:00"
    },
    {
        "title": "Music for Headphones - Marco BIDIN, Fernando Maglia",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>Online presentation by <a href=\"https://forum.ircam.fr/profile/mbalea/\">Marco Bidin</a>&nbsp;(ALEA, Italy) and <strong>Fernando Maglia</strong> (Universidad Nacional de las Artes of Buenos Aires, Argentina)</p>\r\n<p><br />In this presentation, we will portrait the &ldquo;<strong>Music for Headphones</strong>&rdquo; project's social and psychological aims, as well as our search for a virtual concert-hall and its listening experience.&nbsp;</p>\r\n<p>One could transcend physical body restrictions and experience three-dimensional listening within a hybrid body in an augmented reality.</p>\r\n<p>The materials for our third &ldquo;Music for Headphones&rdquo; production emerge from a thorough musicological investigation of Latin American pre-Colombian music. We will also present the digital instruments created from the sound analysis of ethnic instruments.&nbsp;</p>\r\n<p>Furthermore, we will briefly discuss the technical aspects by showing the workspaces in OpenMusic, Max/MSP and how the material is finalised in a DAW. Special attention will be put on the binaural spatialisation techniques.</p>\r\n<p>Music for Headphones III is a production by ALEA, Associazione Laboratorio Espressioni Artistiche.</p>",
        "topics": [
            {
                "id": 1109,
                "name": "ALEA",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 134,
                "name": "Audiosculpt",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 954,
                "name": "CAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2,
                "name": "MaxMSP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 133,
                "name": "Sound synthesis and treatment",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20786,
            "forum_user": {
                "id": 20775,
                "user": 20786,
                "first_name": "Marco",
                "last_name": "Bidin",
                "avatar": "https://forum.ircam.fr/media/avatars/cv_pic.jpg",
                "avatar_url": "/media/cache/c8/12/c812194ab029dcbb2712b19a78eabf13.jpg",
                "biography": "Marco Bidin is a composer, artistic director, organist and harpsichord player from Italy.\n\nAfter his Organ degree in Italy, he studied Early Music performance in Trossingen and Contemporary Music performance in Stuttgart. Subsequently, under the guidance of Marco Stroppa, he completed the terminal degree (Konzertexamen) in Composition and the Certificate of Advanced Studies in Computer Music.\n\nMarco Bidin is active as an international composer and performer. He was invited in institutions like IRCAM (Paris, France), Shanghai Conservatory (China), Silpakorn University (Bangkok, Thailand) and Seoul National University (South Korea) among others.\n\nHe worked as a lecturer for Composition at the HMDK Stuttgart and as an organist for the Protestant Church in Stuttgart. 2010-2023 he was the artistic director of the italian-based NGO association ALEA. He is currently Associate Professor at the Electronic Instrument Engineering Department of the Xinghai Conservatory of Music in Guangzhou, China.",
                "date_modified": "2026-03-04T11:59:23.041276+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 988,
                        "forum_user": 20775,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    },
                    {
                        "id": 634,
                        "forum_user": 20775,
                        "date_start": "2023-11-16",
                        "date_end": "2024-11-16",
                        "type": 0,
                        "keys": [
                            {
                                "id": 155,
                                "membership": 634
                            },
                            {
                                "id": 406,
                                "membership": 634
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "mbalea",
            "first_name": "Marco",
            "last_name": "Bidin",
            "bookmarks": []
        },
        "slug": "music-for-headphones-iii",
        "pk": 2030,
        "published": true,
        "publish_date": "2023-01-28T06:47:42+01:00"
    },
    {
        "title": "Invocations - voix, textes anciens et électronique - Aldo Rodriguez",
        "description": "L'invocation vient du mot latin \"invocate\", qui signifie \"appeler\", \"demander\". Il s'agit d'une supplication adressée à Dieu, à une divinité, à un saint, à un être spirituel ou à des démons. Invocations - est une série de 6 arias pour soprano et électronique composées principalement avec des outils développés dans l'ircam. Transformation de l'électronique et de la voix en temps réel.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par: Aldo Rodriguez&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/aldorodriguez/\">Biography</a><br /><br /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/64f447f261f45dc383e138a3786e090d.png\" /></p>\r\n<div>\r\n<p><strong>L'invocation vient du mot latin \"invocate\", qui signifie \"appeler\", \"demander\". Il s'agit d'une supplication adress&eacute;e &agrave; Dieu, &agrave; une divinit&eacute;, &agrave; un saint, &agrave; un &ecirc;tre spirituel ou &agrave; des d&eacute;mons.</strong></p>\r\n<p>Les invocations font partie de la nature humaine et permettent d'entrer en contact avec ces &ecirc;tres &eacute;th&eacute;r&eacute;s. Les invocations sont &eacute;galement utilis&eacute;es pour attirer un esprit ou une force mal&eacute;fique. Elles peuvent &eacute;galement &ecirc;tre utilis&eacute;es comme des ordres ou des sorts pour contr&ocirc;ler ou obtenir des faveurs. Ainsi, l'invocation devient une auto-identification avec certains esprits. C'est une succession de mots, un message qui acquiert des qualit&eacute;s magiques.</p>\r\n<p>Ce projet comprend une s&eacute;rie d'&oelig;uvres compos&eacute;es pour soprano et &eacute;lectronique en temps r&eacute;el, inspir&eacute;es d'anciennes invocations, utilis&eacute;es jusqu'&agrave; aujourd'hui.</p>\r\n<p><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3c937870079641370619d45afd25114d.png\" /></p>\r\n<p>Invocations - est une s&eacute;rie de 6 arias pour soprano et &eacute;lectronique compos&eacute;es principalement avec des outils d&eacute;velopp&eacute;s dans l'ircam. Transformation de l'&eacute;lectronique et de la voix en temps r&eacute;el. 
Cette s&eacute;rie a &eacute;t&eacute; compos&eacute;e et d&eacute;di&eacute;e &agrave; la soprano Nadia Lamadrid.</p>\r\n<p></p>\r\n</div>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1815,
                "name": "Invocation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1817,
                "name": "Magic",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 277,
                "name": "Max 8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1816,
                "name": "soundbox",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 803,
            "forum_user": {
                "id": 803,
                "user": 803,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_7905.JPG",
                "avatar_url": "/media/cache/ff/ea/ffea79f9141ce7f0ba245cc8c8755b6d.jpg",
                "biography": null,
                "date_modified": "2025-07-28T06:04:09.972532+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "aldorodriguez",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "invocations-voix-textes-anciens-et-electronique",
        "pk": 2750,
        "published": true,
        "publish_date": "2024-02-16T17:53:50+01:00"
    },
    {
        "title": "Home Safety Tips to Protect Your Family Every Day",
        "description": "Discover the best home safety tips, fire prevention guides, childproofing advice, and security solutions at HomeSafetyBlog.site to keep your home and family safe every day.",
        "content": "<h1>🏡 Home Safety Tips to Protect Your Family Every Day</h1>\n<p>Keeping your home safe is essential for protecting your loved ones. Every day, small hazards can turn into accidents if we aren&rsquo;t careful. By adopting simple safety habits and precautions, you can make your home a secure place for everyone in your family.</p>\n<p>This guide provides practical tips that you can use daily to ensure the safety of your family and home.</p>\n<hr>\n<h2>🔒 1. Secure Doors and Windows</h2>\n<ul>\n<li>Always lock doors and windows, even when you are at home.</li>\n<li>Use <strong>deadbolt locks</strong> and <strong>window locks</strong>.</li>\n<li>Consider installing <strong>security cameras</strong> or a <strong>smart doorbell</strong> to monitor your home.</li>\n</ul>\n<blockquote>\n<p>A secure home is the first step to preventing intrusions and theft.</p>\n</blockquote>\n<hr>\n<h2>🔥 2. Fire Safety</h2>\n<ul>\n<li>Install <strong>smoke detectors</strong> in every room and check them monthly.</li>\n<li>Keep a <strong>fire extinguisher</strong> in the kitchen and near high-risk areas.</li>\n<li>Avoid leaving cooking unattended and unplug appliances when not in use.</li>\n<li>Educate your family on how to respond in case of a fire.</li>\n</ul>\n<hr>\n<h2>⚡ 3. Electrical Safety</h2>\n<ul>\n<li>Replace <strong>damaged wires and appliances</strong> immediately.</li>\n<li>Avoid overloading electrical outlets.</li>\n<li>Keep electrical devices away from water to prevent shocks.</li>\n<li>Use <strong>surge protectors</strong> for sensitive electronics.</li>\n</ul>\n<hr>\n<h2>🧴 4. Store Hazardous Items Safely</h2>\n<ul>\n<li>Keep medicines, cleaning products, and chemicals <strong>out of reach of children</strong>.</li>\n<li>Clearly label all hazardous items.</li>\n<li>Store sharp tools and equipment securely.</li>\n</ul>\n<hr>\n<h2>👶 5. 
Childproof Your Home</h2>\n<ul>\n<li>Use <strong>safety gates</strong> for stairs and high-risk areas.</li>\n<li>Cover electrical outlets with protective caps.</li>\n<li>Anchor heavy furniture to walls to prevent tipping.</li>\n<li>Keep small objects away from young children to avoid choking hazards.</li>\n</ul>\n<hr>\n<h2>🐾 6. Pet Safety</h2>\n<ul>\n<li>Store <strong>pet food and toxic substances</strong> safely.</li>\n<li>Make sure fences and gates are secure to prevent pets from escaping.</li>\n<li>Keep pets away from harmful plants and foods (e.g., chocolate).</li>\n</ul>\n<hr>\n<h2>🚪 7. Emergency Preparedness</h2>\n<ul>\n<li>Create a <strong>family emergency plan</strong> and share it with all members.</li>\n<li>Keep emergency numbers visible and accessible.</li>\n<li>Maintain a <strong>well-stocked first aid kit</strong>.</li>\n<li>Conduct regular <strong>fire and evacuation drills</strong>.</li>\n</ul>\n<hr>\n<h2>💡 8. Improve Home Lighting</h2>\n<ul>\n<li>Install <strong>motion-sensor lights</strong> around the home exterior.</li>\n<li>Keep hallways, stairs, and entryways well-lit.</li>\n<li>Use <strong>night lights</strong> in children&rsquo;s rooms and elderly spaces.</li>\n</ul>\n<hr>\n<h2>🛠️ 9. Regular Home Maintenance</h2>\n<ul>\n<li>Inspect gas lines and electrical wiring periodically.</li>\n<li>Fix leaks, cracks, and damaged infrastructure promptly.</li>\n<li>Clean chimneys, vents, and drains regularly.</li>\n<li>Test locks and alarms monthly.</li>\n</ul>\n<hr>\n<h2>🧠 10. Awareness and Vigilance</h2>\n<ul>\n<li>Be cautious with strangers and suspicious activities.</li>\n<li>Teach children basic safety rules.</li>\n<li>Avoid sharing personal details online or with unknown visitors.</li>\n<li>Stay alert to potential hazards every day.</li>\n</ul>\n<hr>\n<h2>✅ Conclusion</h2>\n<p>Home safety is an ongoing process that requires daily attention. By implementing these tips, you can protect your family from accidents, emergencies, and security threats. 
Small, consistent actions make a huge difference in creating a safe and secure home.</p>\n<hr>\n<h3>🔖 Tags</h3>\n<p>Home Safety, Family Protection, Fire Safety, Child Safety, Home Security, Daily Safety Tips, Accident Prevention, Emergency Preparedness, Safe Home, Household Safety<a href=\"https://homesafetyblog.site/\">https://homesafetyblog.site/</a></p>",
        "topics": [],
        "user": {
            "pk": 166275,
            "forum_user": {
                "id": 166039,
                "user": 166275,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/home_safety_resized_i6b0y7H.png",
                "avatar_url": "/media/cache/e4/6e/e46e9ce95cecec2da4e851a327bdbe60.jpg",
                "biography": null,
                "date_modified": "2026-03-31T18:21:09.320991+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "homesafety",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "home-safety-tips-to-protect-your-family-every-day",
        "pk": 4564,
        "published": false,
        "publish_date": "2026-03-31T18:19:13.417416+02:00"
    },
    {
        "title": "Virtuosic Embodiment of a Modalys String through  Color Tracking Technology by Yongwoo Lee",
        "description": "This article introduces the use of Modalys (a physical modeling synthesis tool) String in live settings by utilizing color tracking technology through MaxMSP/Jitter. It explores the approach of creating highly detailed and creative sounds by adjusting parameters such as the density, radius, and length of various physical materials. By changing the material and tone of the strings, it also discusses the diverse possibilities of improvisation in a live environment, as well as methods for effectively performing and composing.",
        "content": "<h2></h2>\r\n<p>ABSTRACT</p>\r\n<p><em>This paper introduces the use of Modalys (a physical modeling synthesis tool) String in live settings by utilizing color tracking technology through MaxMSP/Jitter. It explores the approach of creating highly detailed and creative sounds by adjusting parameters such as the density, radius, and length of various physical materials.</em> <em>Furthermore, it discusses methods for effectively performing and composing in a live environment.</em></p>\r\n<p>1. INTRODUCTION</p>\r\n<p>Modalys String is divided into a mono-string object, primarily used for plucking, and a bi-string object, used for bowing. Each object generates sound by receiving values for horizontal and vertical positions, simulating the actual friction between the bow and the string. Additionally, factors such as weight, rosin, and access position significantly influence sound generation.</p>\r\n<p>2. PARAMETERS OF MODALYS STRING FOR GENERATING SOUND</p>\r\n<p>In experiments with bowing access points for Modalys String, several parameters were analyzed. The vertical position ranged from -0.001 to 0.001, where values at the minimum (upper side) produced a light, harmonic, sul tasto sound, while values at the maximum (lower side) created a harsh, tough, sul ponticello sound. The horizontal position varied between -4 and 4, determining smooth lateral movement. Weight was tested between 0.7 and 1.0, affecting the pressure applied to the bow and thereby influencing the dynamics of the sound. Finally, the access position ranged from 0.5 to 0.01, controlling the precise point of contact on the string and significantly impacting the tone quality.</p>\r\n<p><img src=\"https://forum.ircam.fr/media/uploads/user/c3d9e68635f280f2685de88ace0d070a.tiff\" /></p>\r\n<p><strong>Figure </strong><strong>1</strong>. MaxMSP patching for the primary parameters for Modalys.</p>\r\n<p>3. 
A DIVERSITY OF MATERIALS FOR MODALYS STRING</p>\r\n<p>Modalys creates strings from a variety of materials by adjusting the density and Young's modulus values of different metals, woods, and synthetic materials. The range of material properties studied at IRCAM spans from spider silk to uranium, allowing for extremely nuanced changes in sound. This information can be input into Max/MSP, making it easy to implement and manipulate sounds in real-time.</p>\r\n<p>Physical modeling synthesis with Modalys requires recomputation each time parameters change, which can lead to clicking and other issues. Traditionally, the melt-hybrid object has been effectively used when changing materials or pitches. Therefore, I connected the two strings in a hybrid configuration and implemented a separate mono-string for plucking, thus completing the Modalys coding.</p>\r\n<p><strong><img src=\"https://forum.ircam.fr/media/uploads/user/c454ffea1a1db30f9c2f91f93fe3bf49.tiff\" /></strong></p>\r\n<p><strong>Figure 2</strong>. Modalys coding in the Max/MSP environment.</p>\r\n<p>Additionally, materials and pitch changes through the hybrid system are designed so that while String 1 is producing sound, String 2 undergoes modifications over time. These changes and visual information are then patched and displayed to the performer.</p>\r\n<p><strong><img src=\"https://forum.ircam.fr/media/uploads/user/8b187c8205aebe69a2c99ded507d4403.tiff\" /></strong></p>\r\n<p><strong>Figure 3</strong>. Example of the pictures of the pitches and materials in the hybrid object section.</p>\r\n<p>4. COLOR TRACKING APPROACH VIA JITTER</p>\r\n<p>Important parameters of Modalys String are adjusted by tracking colors captured by the camera through Jitter. Specifically, the blue globe on the left controls the hybrid position, enabling changes to the pitches and material properties between the two strings. 
The green and red globe on the right is used, with the red side for bowing the string and the green side for plucking the string. Three-dimensional gesture control is used for manipulating these parameters through Jitter.</p>\r\n<p><strong><img src=\"https://forum.ircam.fr/media/uploads/user/f219fd71a7ec6c93e0ae7791e348dc3f.tiff\" /></strong></p>\r\n<p><strong>Figure 4</strong>. RGB globes for the color tracking.</p>\r\n<p>The three-dimensional gesture control for Modalys String operates as follows: the horizontal value obtained through Jitter adjusts parameters related to bowing, such as the horizontal and vertical positions and the weight. The vertical position value modifies the rosin, access position, and timbre (including constant-loss and frequency-loss).</p>\r\n<p>Finally, the third dimension, the Z-axis, uses the amount of color tracking as its value. This is represented as white on a black screen to provide an instant visualization of the amount.</p>\r\n<p><strong><img src=\"https://forum.ircam.fr/media/uploads/user/254651c19e528cb47bfeae50cd047ad3.tiff\" /></strong></p>\r\n<p><strong>Figure </strong><strong>5</strong>. The degree of color exposure is represented in black and white.</p>\r\n<p>5. DETAILED APPROACH TO IMPROVISING STRATEGIES</p>\r\n<p>The strategies applied for using this patch for live improvisation are as follows:</p>\r\n<p>The green and red globe on the right-hand side is used to control different aspects of the string. Specifically, the red side is designated for bowing the string, while the green side is used for plucking. Since these colors are opposites, the Z-axis value, which reflects the amount of green color, indicates their contrast. As a result, an increase in the Z-axis value of the green color reduces the amplitude of the bowing sound, while the red color, being its opposite, enhances it. 
This method effectively interweaves the bowing and plucking sounds by utilizing the color contrast to adjust their relative intensities.</p>\r\n<p>Additionally, pressing a MIDI pedal makes random materials and pitches appear. These images and pitches provide visual information that influences the performer in a virtuosic manner.</p>\r\n<p>Specifically, the left hand (blue color globe) controls pitch and material changes by smoothly adjusting the hybrid position.</p>\r\n<p>Moreover, anticipated benefits include precise cues for sound processing (such as AM, RM, etc.), the ability to work with limited scales for specific pitches, and the integration of audiovisual elements with material properties. This approach allows even those who do not know how to play a string instrument to perform immediately, while a deeper understanding of material properties opens up numerous possibilities for creating innovative music.</p>\r\n<p>6. CONCLUSIONS</p>\r\n<p>Modalys is known to be a challenging coding program to handle in live settings. However, efficient coding through color tracking in a controlled environment, with a focus on key parameters, demonstrates the possibility of effective performance and creation in live scenarios. This patch is designed to be usable even by those who have no prior knowledge of Max/MSP/Jitter or Modalys, and will become even more efficient through optimization and updates to some details.</p>\r\n<p><strong>This talk will be presented during the Ircam Forum Workshop in Seoul at Seoul National University&nbsp;</strong></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-seoul-6-8-november-2024/\">More info on the event</a></p>",
        "topics": [
            {
                "id": 2247,
                "name": "COLOR TRACKING",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 75,
                "name": "Jitter",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 31469,
            "forum_user": {
                "id": 31421,
                "user": 31469,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/%EC%9D%B4%EC%9A%A9%EC%9A%B0.jpeg",
                "avatar_url": "/media/cache/e5/09/e509a55ca17805249bddd1d9be5b33fc.jpg",
                "biography": "Yongwoo Lee is a composer deeply interested in humanities and aesthetics. During his undergraduate studies, he majored in history and culture content development while minoring in composition. Throughout his involvement as a researcher at the History and Culture Archive Center, committee member at the Daegu City, cultural interpreter, conscripted firefighters agent (CFA) and researcher at CREAMA (Center for Research in Electro-Acoustic Music and Audio), he gained invaluable life experiences.\nHis artistic aim revolves around integrating various elements of humanities into music. For instance, his compositional methodologies include translating poetry into music based on Korean phonetics by mirroring it in tones or applying 20th-century music techniques sequentially in line with historical developments. Having completed his master's degree in composition, he recently delved into electronic music. His works were selected and performed at HEATWAVE, DCMC 2019, ICMC 2023, 2024, 12th  CUHK (The Chinese University of Hong Kong, Shenzhen) Salon Concert and FEST-M 2024 (KEAMS).",
                "date_modified": "2026-02-02T18:37:29.789590+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 950,
                        "forum_user": 31421,
                        "date_start": "2024-10-04",
                        "date_end": "2025-10-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "yongwoo",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3012,
                    "user": 31469,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "virtuosic-embodiment-of-a-modalys-string-through-color-tracking-technology",
        "pk": 3012,
        "published": true,
        "publish_date": "2024-10-04T18:17:36+02:00"
    },
    {
        "title": "Workshop ASAP by Pierre Guillot",
        "description": "In this workshop, Pierre Guillot will present the new features of the ASAP plug-ins collection, and in particular, the Psycho Filter and Stretch Life plug-ins. You'll be invited to explore the possibilities offered by the ASAP collection on computers and iPads!",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">Presented by Pierre Guillot</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/guillot/\" target=\"_blank\">Biography</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/projects/detail/asap/\" target=\"_blank\">ASAP Project</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><span>The Psycho Filter plug-in lets you draw shape filters on the sound spectrogram and control their gain and fade. The sound representation and user interface enable you to create highly complex and precise surface filters to reduce or enhance specific parts of the sound's spectral components, to compensate for annoying artifacts in the sound, to isolate certain specificities of the sound and to creatively transform the sound.</span></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/screenshot_2025-03-06_at_14.08.03.png\" alt=\"\" width=\"1256\" height=\"629\" /></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><span>The Stretch Life plug-in allows you to stretch and compress sound in creative and original ways. 
Thanks to an intelligent algorithm, resulting from many years of work and experimentation, the temporal transformations offer an exceptional rendering quality by preserving the pitch of the harmonic components of the sound and the random characteristics of the noisy parts. Thanks to the ergonomic interface allowing real-time visualization of the transformations on the spectrogram, the plugin opens up new creative and original possibilities. The many marker editing operations provide fast and efficient solutions for sound engineers to synchronize tracks, resample audio files, and more.</span></div>\r\n<div class=\"c-content__button\"><span></span></div>\r\n<div class=\"c-content__button\"><span><img src=\"/media/uploads/screenshot_2025-03-06_at_14.10.14.png\" alt=\"\" width=\"1197\" height=\"701\" /></span></div>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "workshop-asap-by-pierre-guillot",
        "pk": 3333,
        "published": true,
        "publish_date": "2025-03-06T17:11:47+01:00"
    },
    {
        "title": "Video game technology as an AV art and culture dissemination strategy: Case study \"Journey to the center of the sound\" in the MONOM 4D sound system - The Acoustic Heritage Collective",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p><span>By <a href=\"https://forum.ircam.fr/profile/acousticheritagecoll/\">Acoustic Heritage Collective</a></span></p>\r\n<p><span></span></p>\r\n<p><span>This presentation covers our strategies for creating immersive virtual reality simulations of artwork in 3D-modeled multi-channel spaces using video game technology. We will show the workflow we used to create our experimental VR prototype, based on our experience with our immersive piece &ldquo;Journey to the center of the Sound&rdquo; at the MONOM studios.&nbsp;</span></p>\r\n<p><a href=\"https://patrimoniacustic.cat/journey/home.html\"><span>https://patrimoniacustic.cat/journey/home.html</span></a></p>\r\n<p><span><img alt=\"\" src=\"/media/uploads/user/f640d853c97177c6b329324bbe9a6f7a.png\" /></span></p>\r\n<p><span>Some of the topics of this presentation include techniques in 3D modeling, in-situ recording of Ambisonic Room Impulse Responses, Electro-Acoustic System Simulation, AV source rendering, Virtual Audio Interfaces, VR design, Real-Time Auralization, VR Application export and Beta testing.&nbsp;</span></p>\r\n<p><span>As sound and acoustic artists and musicians, we focus our work on the study of the sound/space relationship. Modeling spaces through in-situ recording of Ambisonic Impulse Responses takes spatial perception to a higher level. We work mainly with artists and musicians, but these types of simulations can also be applied to cultural heritage, Archaeoacoustics, and even educational and tourism purposes.&nbsp;</span></p>\r\n<p><strong>This Installation will be presented during</strong></p>\r\n<p><strong>Forum Ircam Workshop 29-31 March 2023</strong></p>\r\n<p class=\"wys-highlighted-paragraph\"><a href=\"https://forum.ircam.fr/collections/detail/ateliers-du-forum-ircam-edition-speciale-spatialisation-arvr/\">https://forum.ircam.fr/collections/detail/ateliers-du-forum-ircam-edition-speciale-spatialisation-arvr/</a></p>",
        "topics": [
            {
                "id": 1113,
                "name": "auralization",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 446,
                "name": "Convolution",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 622,
                "name": "Immersiveaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1114,
                "name": "Impulse Response",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 899,
                "name": " spatialization ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1115,
                "name": "Unreal Engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32587,
            "forum_user": {
                "id": 32539,
                "user": 32587,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/logoOK01LOW.jpg",
                "avatar_url": "/media/cache/a2/48/a248a0a6d01b7c5b0a27cd160b4f9d06.jpg",
                "biography": "Acoustic Heritage Collective is the European extension of the former Catalonian project Patrimoni Acustic. We are an open group of culture and arts professionals interested in enhancing acoustic and sound heritage. Our lines of action include:\n\nSafeguarding of Acoustic and Sound Heritage\nWorkshops and lectures\nResearch and Dissemination\nDigitization of heritage\n\nCurrently our collective is formed by:\n\nGinebra Raventós de Volart (Sound Artist, Poet and Psychologist)\nExecutive Production, Diffusion Management and audiovisual documentary recording\n\nEmilio Marx (Acoustic Engineer, Sound Technician, Sound Artist)\nAcoustic Consulting and Technical Production\n\nEdgardo Gomez (Acoustic Engineer, Sound Technician, Sound Artist)\nAcoustic Consulting and Technical Production\n\nJoan Lavandeira  (Engineer, illuminator and digital artist)\nTechnology advice and realization of 3d diffusion material\n\nMathias Klenner  (Architect and Sound Artist)\nArchitectural modeling and graphic design\n\nPaolo Carretero (Web programmer)\nWebsite implementation\nTechnology advice and realization of 3d diffusion material",
                "date_modified": "2025-03-25T08:07:41.261329+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "acousticheritagecoll",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "video-game-technology-as-an-av-art-and-culture-dissemination-strategy-case-study-journey-to-the-center-of-sound-in-the-monom-4d-sound-system",
        "pk": 2035,
        "published": true,
        "publish_date": "2023-02-02T19:51:33+01:00"
    },
    {
        "title": "Modalys Tutorial No. 6: I Felt the Sapphire",
        "description": "Sixth installment of my tutorial series on using Modalys and its libraries in Modalisp, OpenMusic, and Max.",
        "content": "<p><strong>In this tutorial, we strike a rectangular plate using a felt connection.</strong></p>\r\n<p></p>\r\n<p style=\"text-align: justify;\">The plate objects are probably among my favorites in Modalys. The endless possibilities of their material properties make them a wonderful playground, and what better way to celebrate that than with a little bling, by making the plate out of sapphire ;-). The felt connection adds a great characteristic, although its various parameters remain somewhat mysterious at their default values. While it is meant to resemble a felt mallet, the audible result is not always as clear as the documentation describes.</p>\r\n<h6 style=\"text-align: justify;\"></h6>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/KRzX6NHY5Ys\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h6 style=\"text-align: center;\"></h6>\r\n<p style=\"text-align: left;\"><strong>This tutorial was created by Olav Lervik.&nbsp;</strong></p>",
        "topics": [
            {
                "id": 466,
                "name": "Felt",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 348,
                "name": "Max externals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 133,
                "name": "Sound synthesis and treatment",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n6-i-felt-the-sapphire",
        "pk": 728,
        "published": true,
        "publish_date": "2020-09-29T10:00:00+02:00"
    },
    {
        "title": "\"Generative Music from the Quantum World\" by Jean-Claude Heudin",
        "description": "Quantum algorithms challenge the normal way of creating music. Beyond the inherent stochastic nature of a quantum system, a new universe of inspiration and possibilities emerges. Integrated within the ANGELIA software, a hybrid generative AI we have developed, our musical experiments are based on the concept of Quantum Notes, called Qunotes, which we have recently introduced. A Qunote is a specific musical concept that follows the principles of quantum mechanics: superposition, coherence/decoherence, and entanglement. Qunotes are similar to Qubits, but with musical notes instead of binary information. In a classical composition, a note would have to be in a single state, i.e. one pitch value (if we consider this parameter only). In contrast, the quantum state of a qunote is represented by a linear superposition of a defined number of pitch values. It’s only when the piece of quantum music is played that a qunote collapses into a defined state. Each time you play the same qunote, you may obtain a different note depending on the probability amplitudes. This results in many possible interpretations of the same score. Therefore, you are composing with probability amplitudes instead of fixed notes.",
        "content": "<h5><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></h5>\r\n<h3>Quantum Music</h3>\r\n<p>Quantum theory is a promising source of inspiration for experimental electronic music [1]. In this framework, we have recently introduced the concept of Quantum Notes [2]. A Qunote is a specific musical concept that follows the principles of quantum mechanics: superposition, coherence/decoherence, and entanglement. Qunotes are like Qubits, but with musical notes instead of binary information. In a classical composition, a note would have to be in one state, i.e. one pitch value, if we consider only this parameter. In contrast, the quantum state of a qunote is represented by a linear superposition of a defined number of pitch values. It&rsquo;s only when the piece of quantum music is played (measured) that a qunote collapses into a defined state. Each time you play the same qunote, you may obtain different notes depending on the probability amplitudes. This results in many possible interpretations of the same score. As a consequence, when composing, you need to think of all the possible interpretations. You are composing with probability amplitudes instead of fixed notes.</p>\r\n<p>Qunotes are implemented using qubits on a quantum computer. However, publicly available multi-qubit computing resources are very limited. As a consequence, we simulate quantum properties with a set of dedicated algorithms implemented within the Angelia software. We have also developed an optical one-qubit device that makes it possible to experiment with real quantum effects. The optical quantum device is based on the KLM protocol [3].</p>\r\n<h3>About Angelia</h3>\r\n<p>Angelia is an artistic and research project developed since 2017 by Jean-Claude Heudin. Angelia is a contraction of &ldquo;Angel&rdquo; and &ldquo;IA&rdquo;, the French acronym for Artificial Intelligence. The aim of the project is to enhance the creativity of the artist when composing, and to augment his capabilities when performing. Angelia is a hybrid generative AI [4]. The music is composed using a dedicated high-level programming language which makes it possible to choose, for each instruction, among different bio-inspired algorithms such as a Corpus-based Genetic Algorithm and Cellular Automata, among many others [5].<br />Most AI music systems generate music with no feedback from the produced sounds. In parallel with the generation, Angelia analyzes the produced music in order to generate stimuli that update an &ldquo;emotional metabolism&rdquo;. The resulting emotional state influences parameters that modify the expressiveness of the interpretation. This emotional metabolism is inspired by our previous work on emotional virtual characters [6].</p>\r\n<h3>Ethics</h3>\r\n<p><span>The Angelia project is developed with a strong ethical approach. First, it is designed to be played by an artist, not to replace him. The music can be freely listened to on an independent and open music platform. </span>The project strictly follows copyright laws and regulations: only public-domain data are used for machine learning. <span>Angelia is also environmentally friendly: it runs on a simple tablet with minimal energy consumption, and does not require any heavy remote computing or data center.</span></p>\r\n<h3>References</h3>\r\n<p>1. Miranda, E.R., Quantum Computer Music &ndash; Foundations, Methods and Advanced Concepts, Springer, 2022.</p>\r\n<p>2. Heudin, J.-C., Quantum Music with Qunotes, researchgate.net/publication/387503808, 2024.</p>\r\n<p>3. Knill, E., Laflamme, R., Milburn, G.J., A scheme for efficient quantum computation with linear optics, Nature, 409 (6816), Nature Publishing Group: 46&ndash;52, 2001.</p>\r\n<p>4. Heudin, J.-C., Angelia: An Emotional Generative Algorithmic Intelligence for Contemporary Electronic Music, 27th Generative Art Int. Conf., Hosted by UNESCO, Venice, Italy, 2024.</p>\r\n<p>5. Heudin, J.-C., Angelia: An emotional AI for electronic music, researchgate.net/publication/368513976, Paris, 2024.</p>\r\n<p>6. Heudin, J.-C., A Bio-inspired Emotion Engine in the Living Mona Lisa, in Proceedings of the Virtual Reality Int. Conf., Laval, 2015, 1&ndash;4.</p>\r\n<p><img src=\"/media/uploads/call-parisenghien-jc-heudin-projectpicture1.jpg\" alt=\"\" width=\"807\" height=\"484\" /></p>",
        "topics": [
            {
                "id": 1916,
                "name": "#ai",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3958,
                "name": "#algorithmicintelligence",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3960,
                "name": "#ambientmusic",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3959,
                "name": "#experimentalelectronicmusic",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3957,
                "name": "#quantummusic",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 104600,
            "forum_user": {
                "id": 104468,
                "user": 104600,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/7445998D-E8B9-4259-85BD-54412A513D3F.jpeg",
                "avatar_url": "/media/cache/3e/fd/3efd6cd122d9b8241608dff3851a387e.jpg",
                "biography": "Jean-Claude Heudin is a scientist, composer, and writer. He holds a PhD and the Director of Research degree from the University of Paris-Sud. He is the author of numerous international scientific papers, as well as several books in the fields of Artificial Intelligence and Complexity Science published by Odile Jacob and by Science eBook, which he founded. His current research focuses on Emotional AI and Contemporary Electronic Music with the Angelia project.",
                "date_modified": "2026-01-26T18:53:26.425424+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jcheudin",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4127,
                    "user": 104600,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "generative-music-from-the-quantum-world",
        "pk": 4127,
        "published": true,
        "publish_date": "2025-12-29T16:58:32+01:00"
    },
    {
        "title": "The World of Freedom - Tiange Zhou",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>The World of Freedom is an immersive virtual space. It invites people to empathetically imagine a &ldquo;free&rdquo; way of living together in the post-pandemic Anthropocene. Technology allows people to live in their own identical personal spaces while accessing shared information and areas with neighbors, even those with different faiths and beliefs. Since we are all inside this pandemic bubble, after most people stayed at home for a couple of months, a global-scale collective memory began to emerge, making people more empathetic toward others&rsquo; situations. Meanwhile, more and more people have to learn and gain experience virtually. This attention to empathy and the new work-from-home mode inspired the initial idea of this virtual reality experience. We began to ask how people could learn and think more effectively in this brand-new virtual age. The post-pandemic period might never pass; some phenomena assumed to be temporary might become the new normal for the world. We use Unity to build an immersive and empathetic space that embodies a hypothetical social dilemma in a virtual manifestation. In this way, we hope to make the experience a heuristic innovation for the post-pandemic Anthropocene. People might be able to find the most meaningful answer by walking in each other&rsquo;s shoes. Social distance can also be virtually controlled in this program by tracking whether the number of participants overloads a space.</p>",
        "topics": [],
        "user": {
            "pk": 21145,
            "forum_user": {
                "id": 21134,
                "user": 21145,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/tiange_zhou_picture.jpg",
                "avatar_url": "/media/cache/b9/2e/b92e9acb7d31a2c8a504750e31223ddc.jpg",
                "biography": "Dr. Tiange Zhou is the director of the Art & Technology Lab at the School of Future Design, Beijing Normal University. She is a composer, interdisciplinary artist, and researcher. She earned her Bachelor's degree from the Manhattan School of Music, followed by a Master's degree from Yale University, and a Ph.D. from UCSD. Her works have received recognition, including the American Filmatic Arts Awards for Best Sound Design in Short Films, First Prize at the Kirkos Kammer International Chamber Music Composition Competition, and a Gold Winner of the Hermes Creative Awards. She has served as a course lecturer and collaborative artist at Yale College, the University of California, San Diego, and the Harvard Chinese Art Media Lab (CAM Lab) before relocating to China. Her research has been published by IEEE-ICME, IRCAM FORUM, SIGGRAPH Asia, and CRC Press of Taylor & Francis Group.",
                "date_modified": "2025-03-29T05:41:42.124253+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 992,
                        "forum_user": 21134,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "tiangezhou",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 21145,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "the-world-of-freedom",
        "pk": 2096,
        "published": true,
        "publish_date": "2023-02-28T17:16:20+01:00"
    },
    {
        "title": "SPAT Devices by Music Unit",
        "description": "Presented during the IRCAM Forum Workshops Paris 2023 / Music Unit produces the SPAT collection, Max For Live plugins distributed by Ableton.",
        "content": "<p><span>The <a href=\"https://www.ableton.com/fr/packs/spat-bundle/\">SPAT</a> plugins let you arrange and move sound sources within real or virtual audio spaces, in 2D or 3D, thanks to advanced spatialization engines based on the Spatialisateur processor developed at <a href=\"https://www.ircam.fr/\">IRCAM</a> for almost three decades.</span><br /><br /><img alt=\"\" src=\"/media/uploads/user/6a7fac8a99cd6b475c4b13dc4c01c997.png\" /><br /><br /><span>The plugins are offered in two packs: SPAT Multichannel and SPAT Stereo.</span><br /><br /><span>SPAT Multichannel is intended for artists, producers, and sound engineers who want to get the most out of the multichannel configuration of their studio or concert hall.</span><br /><br /><span>SPAT Stereo is intended for those with simple stereo setups (loudspeakers, headphones) who still want to integrate high-level spatialization techniques into their productions.</span><br /><br /><span>SPAT Devices is developed by <a href=\"http://www.musicunit.fr/music-unit-fr/manuel-poletti\">Manuel Poletti</a> of the <a href=\"http://www.musicunit.fr/musicunit-fr\">Music Unit</a> studio, using the SPAT Max library developed by the <a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac\">Espaces Acoustiques et Cognitifs</a> team - STMS (Ircam, CNRS, Sorbonne Universit&eacute;, Minist&egrave;re de la culture) and distributed by <a href=\"https://ircamamplify.com/\">Ircam Amplify</a>.</span><br /><br /><br /><img alt=\"SPAT devices in action\" src=\"/media/uploads/user/652b3ea2f2ea8143749c0a25bb4e4fa1.png\" /></p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1131,
                "name": "Max For Live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1132,
                "name": "Music Unit",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 23,
            "forum_user": {
                "id": 23,
                "user": 23,
                "first_name": "Manuel",
                "last_name": "Poletti",
                "avatar": "https://forum.ircam.fr/media/avatars/PortraitMU_Manuel_Poletti.jpeg",
                "avatar_url": "/media/cache/25/a9/25a94fa5eedfb0e20cf188183156a531.jpg",
                "biography": "Sound artist and composer, computer music designer at IRCAM and consultant at Cycling'74, Manuel Poletti is in charge, within Music Unit, of the development of large-format sound installation projects and software technologies dedicated in particular to augmented instruments, computer-assisted composition, and sound spatialization. Manuel collaborates regularly with many leading contemporary artists, with whom he creates elaborate sound systems and content in the fields of stage, art, design, and architecture.",
                "date_modified": "2026-02-05T12:39:13.481208+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 4,
                        "forum_user": 23,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "poletti",
            "first_name": "Manuel",
            "last_name": "Poletti",
            "bookmarks": []
        },
        "slug": "spat-devices-by-music-unit",
        "pk": 2047,
        "published": true,
        "publish_date": "2023-02-09T10:21:01+01:00"
    },
    {
        "title": "REACHing Space - Co-Creative Soundscapes",
        "description": "",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<h2>Spatial Awareness and Influence in Somax2 + Somax2Collider presentation</h2>\r\n<h5>Speakers: Jos&eacute; Miguel Fernandez, Marco Fiorini, Alberto Gatti</h5>\r\n<div>This sound installation, presented during the Forum workshop, explores dynamic auditory landscapes using the Somax2 system with spatial awareness and agent interaction. By integrating a library of audio descriptors implemented in Max/MSP for automated ambisonics spatialization, the system enables co-creative agents to respond to their positions in space and influence each other&rsquo;s behavior. Participants can interact with the soundscape, shaping it in real time by controlling both agents and spatial elements through movement and positioning. Additionally, the new Somax2Collider version (implemented in SuperCollider) controls small self-contained wireless speakers placed around the room to diffuse other generative agents, creating a multi-point auditory experience adaptable to any concert venue or listening space.</div>\r\n<div>\r\n<p><span style=\"text-decoration: underline;\"><strong>Date:</strong></span></p>\r\n<p><span>March 26th, 14:00 - 18:00 (Studio 2)</span></p>\r\n<p><span><img src=\"/media/uploads/somax-screen-641x855.png\" alt=\"\" width=\"641\" height=\"855\" /></span></p>\r\n</div>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17748,
            "forum_user": {
                "id": 17743,
                "user": 17748,
                "first_name": "Gerard",
                "last_name": "Assayag",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e7f22ca09fef8b854d33ed5de26b107e?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-03T15:40:57.523680+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1236,
                        "forum_user": 17743,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "assayag",
            "first_name": "Gerard",
            "last_name": "Assayag",
            "bookmarks": []
        },
        "slug": "reaching-space-co-creative-soundscapes-spatial-awareness-and-influence-in-somax2-somax2collider-presentation",
        "pk": 3297,
        "published": true,
        "publish_date": "2025-02-19T10:32:39+01:00"
    },
    {
        "title": "Interior Design Services: Transform Your Space with Studio Rivet",
        "description": "Professional interior design services by Studio Rivet to transform your home or office into a functional and stylish space. Customized solutions tailored to your needs and modern lifestyle.\n",
        "content": "<p><span style=\"\"><img alt=\"Interior Design Services\" src=\"https://forum.ircam.fr/media/uploads/user/9941948a9b4f1777b80284d5ef1dcd92.png\"></span></p>\n<p><span style=\"\">Creating a space that reflects your personality while maintaining functionality is no easy task. Whether it&rsquo;s a home, office, or commercial setup, the right design can completely transform how a space looks and feels. This is where professional </span><a href=\"https://studiorivet.in/interior-design/\"><strong>Interior Design Services</strong></a><span style=\"\"> come into play.</span></p>\n<p><span style=\"\">At </span><strong>Studio Rivet</strong><span style=\"\">, we believe that great design is more than just aesthetics. It is about creating spaces that inspire, function seamlessly, and elevate everyday living.</span></p>\n<h2><strong>What Are Interior Design Services?</strong></h2>\n<p><span style=\"\">Interior design services involve planning, designing, and enhancing interior spaces to achieve a healthier and more aesthetically pleasing environment. These services cover everything from layout planning and furniture selection to lighting, color schemes, and d&eacute;cor styling.</span></p>\n<p><span style=\"\">A professional interior designer ensures that every element in your space works together harmoniously while meeting your lifestyle or business needs. From concept to completion, interior design services help bring structure, creativity, and efficiency into your project.</span></p>\n<h2><strong>Why Choose Professional Interior Design Services?</strong></h2>\n<h3><strong>1. Expertise and Creativity</strong></h3>\n<p><span style=\"\">Professional designers bring industry knowledge and innovative thinking. At Studio Rivet, every design is carefully curated to balance functionality with modern aesthetics.</span></p>\n<h3><strong>2. Space Optimization</strong></h3>\n<p><span style=\"\">Efficient use of space is crucial, especially in urban environments. 
Interior design services ensure that no area is wasted while maintaining comfort and visual appeal.</span></p>\n<h3><strong>3. Time and Cost Efficiency</strong></h3>\n<p><span style=\"\">Design mistakes can be expensive. Hiring professionals reduces the risk of costly errors and ensures smooth project execution within a defined budget.</span></p>\n<h3><strong>4. Access to Quality Resources</strong></h3>\n<p><span style=\"\">Interior designers have access to trusted vendors, premium materials, and custom solutions that elevate the overall design quality.</span></p>\n<h3><strong>5. Stress-Free Execution</strong></h3>\n<p><span style=\"\">Managing a design project can be overwhelming. Professional services handle everything from planning to execution, making the process hassle-free.</span></p>\n<h2><strong>Interior Design Services Offered by Studio Rivet</strong></h2>\n<p><span style=\"\">At </span><strong>Studio Rivet</strong><span style=\"\">, we provide comprehensive and customized interior design services:</span></p>\n<h3><strong>1. Residential Interior Design</strong></h3>\n<p><span style=\"\">We create homes that reflect your personality and lifestyle. From living rooms to bedrooms, every space is designed with comfort and elegance in mind.</span></p>\n<h3><strong>2. Commercial Interior Design</strong></h3>\n<p><span style=\"\">We design offices, retail outlets, and commercial spaces that enhance productivity and leave a lasting impression on clients.</span></p>\n<h3><strong>3. Modular Kitchen Design</strong></h3>\n<p><span style=\"\">Our kitchen designs combine functionality with style, ensuring efficient layouts and modern aesthetics.</span></p>\n<h3><strong>4. Space Planning and Layout Design</strong></h3>\n<p><span style=\"\">We optimize layouts to improve movement, usability, and overall experience within the space.</span></p>\n<h3><strong>5. 
Custom Furniture and D&eacute;cor</strong></h3>\n<p><span style=\"\">We design unique furniture and d&eacute;cor elements tailored to your specific requirements and design theme.</span></p>\n<h2><strong>Our Design Process at Studio Rivet</strong></h2>\n<p><span style=\"\">A structured approach ensures consistent and high-quality results:</span></p>\n<h3><strong>1. Consultation</strong></h3>\n<p><span style=\"\">We begin by understanding your requirements, preferences, and budget.</span></p>\n<h3><strong>2. Concept Development</strong></h3>\n<p><span style=\"\">Mood boards, layouts, and design concepts are created to visualize the final outcome.</span></p>\n<h3><strong>3. Design Execution</strong></h3>\n<p><span style=\"\">Our team ensures precise implementation with attention to every detail.</span></p>\n<h3><strong>4. Final Styling and Handover</strong></h3>\n<p><span style=\"\">We complete the project with finishing touches that enhance the overall aesthetic.</span></p>\n<h2><strong>Key Elements of Successful Interior Design</strong></h2>\n<p><span style=\"\">To create a well-balanced space, several key elements must be considered:</span></p>\n<h3><strong>1. Lighting</strong></h3>\n<p><span style=\"\">Lighting plays a crucial role in setting the mood and enhancing functionality. A mix of ambient, task, and accent lighting creates depth and dimension.</span></p>\n<h3><strong>2. Color Scheme</strong></h3>\n<p><span style=\"\">The right color palette influences emotions and perception of space. Neutral tones combined with bold accents are widely preferred.</span></p>\n<h3><strong>3. Furniture Selection</strong></h3>\n<p><span style=\"\">Furniture must align with both aesthetics and usability. Proper sizing and placement are essential.</span></p>\n<h3><strong>4. Textures and Materials</strong></h3>\n<p><span style=\"\">Combining different materials such as wood, metal, glass, and fabrics adds visual interest and richness.</span></p>\n<h3><strong>5. 
Functionality</strong></h3>\n<p><span style=\"\">A beautiful design must also be practical. Every element should serve a purpose.</span></p>\n<h2><strong>Latest Trends in Interior Design Services</strong></h2>\n<p><span style=\"\">Interior design is constantly evolving. Some of the most popular trends include:</span></p>\n<ul>\n<li style=\"\"><span style=\"\">Minimalist and clutter-free interiors</span></li>\n<li style=\"\"><span style=\"\">Sustainable and eco-friendly materials</span></li>\n<li style=\"\"><span style=\"\">Smart home integration and automation</span></li>\n<li style=\"\"><span style=\"\">Open and flexible spaces</span></li>\n<li style=\"\"><span style=\"\">Statement lighting and bold accent pieces</span></li>\n</ul>\n<p><span style=\"\">Studio Rivet integrates these trends while ensuring timeless appeal.</span></p>\n<h2><strong>How Interior Design Services Add Value</strong></h2>\n<p><span style=\"\">Investing in professional interior design services not only improves aesthetics but also increases property value. Well-designed spaces attract better resale value and create a positive impression on visitors or clients.</span></p>\n<p><span style=\"\">For businesses, a thoughtfully designed space enhances brand identity and improves customer experience. For homeowners, it improves comfort, organization, and quality of life.</span></p>\n<h2><strong>Why Studio Rivet Stands Out</strong></h2>\n<p><span style=\"\">Choosing the right interior design partner is essential. 
Studio Rivet offers:</span></p>\n<ul>\n<li style=\"\"><span style=\"\">Customized design solutions tailored to each client</span></li>\n<li style=\"\"><span style=\"\">Attention to detail in every aspect of design</span></li>\n<li style=\"\"><span style=\"\">High-quality materials and craftsmanship</span></li>\n<li style=\"\"><span style=\"\">Transparent communication throughout the project</span></li>\n<li style=\"\"><span style=\"\">Timely delivery without compromising on quality</span></li>\n</ul>\n<p><span style=\"\">Our goal is to create spaces that are both visually appealing and highly functional.</span></p>\n<p>&nbsp;</p>\n<h2><strong>FAQs &ndash; Interior Design Services</strong></h2>\n<h3><strong>1. What do interior design services include?</strong></h3>\n<p><span style=\"\">Interior design services include space planning, furniture selection, lighting design, color coordination, and overall styling of interiors.</span></p>\n<h3><strong>2. How much do interior design services cost?</strong></h3>\n<p><span style=\"\">Costs vary based on project size, design complexity, and material selection. Studio Rivet offers tailored solutions to fit different budgets.</span></p>\n<h3><strong>3. How long does an interior design project take?</strong></h3>\n<p><span style=\"\">Timelines depend on the scope of work. Smaller projects may take a few weeks, while larger projects can take several months.</span></p>\n<h3><strong>4. Can I customize my design according to my preferences?</strong></h3>\n<p><span style=\"\">Yes, Studio Rivet specializes in personalized designs that align with your vision and requirements.</span></p>\n<h3><strong>5. 
Why should I choose Studio Rivet?</strong></h3>\n<p><span style=\"\">Studio Rivet provides creative, functional, and high-quality interior design services with a client-focused approach and timely execution.</span></p>\n<h2>&nbsp;</h2>\n<h2><strong>Conclusion</strong></h2>\n<p><span style=\"\">Professional </span><strong>Interior Design Services</strong><span style=\"\"> play a vital role in transforming ordinary spaces into extraordinary environments. Whether you are designing a new space or renovating an existing one, expert guidance ensures the best results.</span></p>\n<p><span style=\"\">With </span><strong><a href=\"https://studiorivet.in/\">Studio Rivet</a></strong><span style=\"\">, you get a perfect blend of creativity, functionality, and professionalism. Let your space reflect your vision with thoughtfully designed interiors that stand the test of time.</span></p>",
        "topics": [],
        "user": {
            "pk": 166591,
            "forum_user": {
                "id": 166354,
                "user": 166591,
                "first_name": "Studio",
                "last_name": "Rivet",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/165b6101aab30457fb81054336e6f95d?s=120&d=retro",
                "biography": "Studio Rivet is a dynamic architecture and design studio driven by a passion for creating meaningful and inspiring spaces. We believe that great design goes beyond aesthetics and should enhance the way people live and interact within a space. Our expertise covers residential, commercial, hospitality, and interior design projects, where we integrate modern design principles with functionality and precision. At Studio Rivet, every project is approached with creativity, strategic planning, and a commitment to delivering high-quality results that stand the test of time.",
                "date_modified": "2026-04-04T11:01:07.755800+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "studiorivet",
            "first_name": "Studio",
            "last_name": "Rivet",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4591,
                    "user": 166591,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "interior-design-services-transform-your-space-with-studio-rivet",
        "pk": 4591,
        "published": false,
        "publish_date": "2026-04-04T11:03:52.384231+02:00"
    },
    {
        "title": "Nightline: Experimental Film and contemporary Music - Caspar de Gelmini",
        "description": "Since 2019 I'm working as a Video Artist in connection to contemporary Music. My Videos were presented in Galeries and Museeums around the world. In my work I combine experimental film (Super 8, 16mm, 35mm, HD, 2K, 4K, 6K) with Sound Art and Music Compositions. I often work together with scientists from different backgrounds. The result are fascinating Video Projections.",
        "content": "<p>Since 2019 I'm working as a Video Artist in connection to contemporary Music. My Videos were presented in Galeries and Museeums around the world. In my work I combine experimental film (Super 8, 16mm, 35mm, HD, 2K, 4K, 6K) with Sound Art and Music Compositions. I often work together with scientists from different backgrounds. The result are fascinating Video Projections.</p>\r\n<p>For the sound I use regulary IRCAM Software like Max and Open Music. For the Visuals I work with analogue and digital Cameras from the 1940s to 2023.&nbsp;</p>\r\n<p>In my work Nightline I show the voyage of my grandmothers mother, who moved in the 1960s from Rotterdam to New York to live in the US. She took the last remaining transatlantic Ship Rotterdam, which I visited last year. The ship is still existing as a Hotel Ship. In other works like Shibuya I tell the story of Robots in Tokyo and the future of artificial Intelligence. Other Videos like \"Life in the future\" deal with nature and science and the climate question.</p>",
        "topics": [
            {
                "id": 1111,
                "name": "experimentalfilm",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 148,
                "name": "Music ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1110,
                "name": "Videoart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18213,
            "forum_user": {
                "id": 18206,
                "user": 18213,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/2019-09_POR.TK_.De-Gelmini-5419ret-700x700_2.jpg",
                "avatar_url": "/media/cache/43/9f/439ff6dab08abf23039108796f8784b0.jpg",
                "biography": "Caspar de Gelmini is a audiovisual Video Artist. He studied Music Composition in Weimar (Bachelor 2013) and Stuttgart (Master 2016) in the Classes of Michael Obst and Marco Stroppa. Afterwards he studied Fine Arts with focus on experimental Film at the Braunschweig University of Art in the class of Michael Brynntrup (Master 2022).\nHis music has been performed by the Bavarian Radio Symphony Orchestra, the Ensemble Intercontemporain and the Ensemble Recherche.\nHis video works have been shown in museums and galleries around the world, e.g. in Europe, Asia, Russia, North America and South America. His focus is on filming scientific topics in connection with experimental music.\nFind out more at: www.caspardegelmini.de",
                "date_modified": "2025-04-04T07:52:57.433634+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "gelmini",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "nightline-experimental-film-and-contemporary-music",
        "pk": 2032,
        "published": true,
        "publish_date": "2023-01-28T13:04:22+01:00"
    },
    {
        "title": "The Power of Sound by Felipe Sanchez Luna (Germany)",
        "description": "KLING KLANG KLONG is a Berlin-based studio redefining sound scenography at the intersection of music, art, science, and technology. Through projects worldwide, they transform sound into a storytelling force—creating immersive experiences where audio takes center stage. In this talk, founder Felipe Sánchez Luna shares the studio’s philosophy and reveals how sound can reshape the way we perceive and connect with the world.",
        "content": "<p></p>\r\n<p>For over a decade, KLING KLANG KLONG has been at the forefront of sound scenography, pushing the boundaries between music, art, science, and technology. Through groundbreaking projects like For Seasons, Fjord &amp; Bertolt, Chasing Waterfall or Light Cloud , the studio has demonstrated how sound can transform spaces and evoke deep emotional connections. Their work spans museums, international fairs, interactive brand experiences, and immersive sound installations&mdash;each designed to make sound more than just a backdrop but a true narrative force.<br />&nbsp;<br />KLING KLANG KLONG approaches sound as a storyteller, shaping emotional experiences where music and sound design CAN take center stage. Rather than merely supporting visuals, sound becomes the main vehicle for storytelling, creating immersive, unforgettable environments.<br />&nbsp;<br />In this talk, founder, creative lead, and managing director Felipe S&aacute;nchez Luna will take you behind the scenes of some of KLING KLANG KLONG&rsquo;s most remarkable projects. He will explore the studio&rsquo;s core philosophy: the power of sound as a primary tool for storytelling. Building on insights from his TED Talk, Felipe will reveal how sound can not only enhance but completely redefine the way we experience the world around us.<img alt=\"Pulse\" src=\"https://forum.ircam.fr/media/uploads/user/582f2533102431d450f4e7e615d0a92a.jpeg\" /></p>\r\n<p><img alt=\"KLING KLANG KLONG \" src=\"https://forum.ircam.fr/media/uploads/user/96807cd02ef87a69a6e3f9c8e7747d14.jpg\" /></p>\r\n<p><img alt=\"Event Horizion - On site\" src=\"https://forum.ircam.fr/media/uploads/user/94e2e4910d6f3d7b2633342938672a5d.jpeg\" /></p>",
        "topics": [
            {
                "id": 3436,
                "name": "sound experiences",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3435,
                "name": "sound scenography",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1867,
                "name": "storytelling",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 149,
                "name": "Technology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4324,
            "forum_user": {
                "id": 4322,
                "user": 4324,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_3160_2_2.jpg",
                "avatar_url": "/media/cache/8a/00/8a000413e1bfe09c4fc4a51243d7af4c.jpg",
                "biography": "Felipe Sánchez Luna, from Bogotá and now based in Berlin, is a pioneer in sound design and interactive experiences. He co-founded kling klang klong, a studio known for its innovative sonic work blending film, music, dance, and technology into immersive soundscapes. With a background in creative coding, Felipe explores the intersection of art and technology, using generative music and intelligent audio engines to turn data into poetic auditory experiences. While highly skilled technically, he remains attuned to the socio-political context of his work, aiming to deepen understanding through sound.\n\nAt kling klang klong, Felipe is both creative and managing director, leading a multidisciplinary team of composers, designers, scientists, and technologists. Their projects span museums, art spaces, virtual worlds, and public events worldwide. Beyond studio work, Felipe shares his insights at major conferences and festivals, including TED Vancouver 2024, TEDx Berlin 2024, KIKK Festival, Hxouse Toronto, Music Tech Germany and AI conference. Despite his achievements, he continues to push boundaries, inspiring audiences to reflect on the role of sound in our lives.",
                "date_modified": "2025-12-18T12:31:42.594348+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "felipesanlu",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-power-of-sound-by-felipe-sanchez-luna-germany",
        "pk": 3735,
        "published": true,
        "publish_date": "2025-10-03T10:28:18+02:00"
    },
    {
        "title": "Zen of Aggression : Composing the Transformational Hybrid by Berk Yagli",
        "description": "The talk/presentation will focus on musical hybridity and critical\r\nmethods developed for the hybridization of electroacoustic and metal\r\nmusic (a part of the presenter's Ph.D. research). These methods are\r\nformed to build a blueprint for hybrid composers (ultimately not only\r\nspecific to metal and electroacoustic but for other genres as well that\r\ncombines spectromorphological focused music with harmony and\r\nrhythm-based music since this type of hybridity requires a unique set\r\nof challenges and complexities). Hybridity in music is a well-\r\nestablished and vibrant contemporary scholarly topic. Even though\r\nthe current literature provides many useful terms and strategies\r\nregarding different types of hybrids, the methods for building hybrids\r\nare rare. The talk will specifically focus on the piece 'Zen of\r\nAggression' and discuss the fruitful and problematic methods when\r\napproaching to compose a transformational hybrid (a hybrid where\r\nboth genres affect each other at the deepest level in which the\r\nresulting hybrid would not sound like each of the genre-hence\r\ntransformed) between electroacoustic music and metal. Different\r\nexcerpts from the piece will be played when discussing the techniques and methodologies. The talk shall also briefly discuss\r\ndifferent types of hybridity (transformational hybrid, eclecticism,\r\npolystylism, and so on), genre and its notions (their problems, their\r\nsignificance, and their contemporary uses), and the notion of fluidity\r\nwhen approaching the hybridity in music to set the context. 
As\r\ntechnology increases at a rapid rate and postmodern conditions only\r\ngrow stronger as lines get blurry, hybridity has been more than a\r\nprominent musical power to embrace this current condition and\r\nreflect through what is considered music and art in the 21st century\r\n(and challenge the notions of high art and low art).\r\n\r\nLink to the piece:\r\nhttps://drive.google.com/file/d/1DViaPZPRGflbCNwQpVZ_lp1ywA\r\nBXDR3A/view?usp=sharing",
        "content": "<h2></h2>\r\n<h2><img alt=\"Picture: Berk Yagli\" src=\"https://forum.ircam.fr/media/uploads/user/952a647790a1a3e9727a161776e3dde6.jpg\" /><img alt=\"Photo: Bogi Nagy\" src=\"https://forum.ircam.fr/media/uploads/user/12cf5d4744559cdfe1b4f1670981c079.jpg\" /></h2>",
        "topics": [
            {
                "id": 2259,
                "name": "Acousmatic Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 647,
                "name": "Computer music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1805,
                "name": "electroacoustic music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2260,
                "name": "Hybrid Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2261,
                "name": "Metal Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 85796,
            "forum_user": {
                "id": 85694,
                "user": 85796,
                "first_name": "Berk",
                "last_name": "Yagli",
                "avatar": "https://forum.ircam.fr/media/avatars/Ars_Electronika_Bruckner_NB_2R0A6008_2.jpg",
                "avatar_url": "/media/cache/56/2d/562dcc1b3e13b97f3a99689e801226d0.jpg",
                "biography": "Berk Yağlı (born 1999) is a Cypriot guitarist, composer, and producer. His mission with his music has been to talk about social, political, and philosophical matters interestingly to invite the listeners into reflecting on the topics. He has been active in the UK since 2017. He studied Music and Sound Technology (University of Portsmouth), Masters in Composition (University of Sheffield), and currently at the University of the Arts London working under Adam Stanovic for his Ph.D. topic hybridity between metal and electroacoustic music. His works have been presented internationally including Argentina (Salta), UK (Leicester, Plymouth, Sheffield, London, Staffordshire), US (New York City, Indianapolis, Georgia, Utah, Kansas City, Missouri), Taiwan (Taipei), South Korea (Seoul), Poland (Krakow), Switzerland (Zurich), Ireland (Limerick), Italy (Padova), Mexico (Morelia), Austria (Linz), Australia (Sydney), China (Shenzhen) and more. He is regularly invited to compose in studios around the world. He won numerous awards for his compositions in international music competitions.",
                "date_modified": "2024-10-07T15:06:06.533772+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 941,
                        "forum_user": 85694,
                        "date_start": "2024-09-30",
                        "date_end": "2025-09-30",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "berkyagli",
            "first_name": "Berk",
            "last_name": "Yagli",
            "bookmarks": []
        },
        "slug": "zen-of-aggression-composing-the-transformational-hybrid",
        "pk": 3022,
        "published": true,
        "publish_date": "2024-10-07T15:02:33+02:00"
    },
    {
        "title": "Still Waters Run Deep - Jiayou Wu, Polina Ami Kosele, Teodora Serbanescu",
        "description": "Un voyage transformateur où la découverte immersive de soi fusionne avec des formes abstraites et une conception sonore de pointe. Cette expérience captivante remet en question les normes, osant déstigmatiser et reconsidérer la narration autour des émotions, de la santé mentale et de la neurodiversité.",
        "content": "<p><em><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></em></p>\r\n<p></p>\r\n<p>Pr&eacute;sent&eacute;&nbsp;par :&nbsp;<span>Jiayou Wu, Polina Ami Kosele, Teodora Serbanescu</span><br /><a href=\"https://forum.ircam.fr/profile/polinaamikosele/\">Biographie Polina Ami Kosele<br /></a><a href=\"https://forum.ircam.fr/profile/10037696/\">Biographie Teodora Serbanescu</a></p>\r\n<p><em></em></p>\r\n<p><em>Still Waters Run Deep</em> exploite le lien spirituel et scientifique entre les &eacute;motions et les th&eacute;rapies alternatives par l'eau. Il s'agit d'une histoire interactive sonore, visuelle et acoustique &agrave; 360&deg;. Notre pratique offre un aper&ccedil;u avant-gardiste d'une exp&eacute;rience de d&eacute;couverte de soi, pr&eacute;sent&eacute;e dans un environnement r&eacute;actif &agrave; 360&deg; en trois parties. Les trois espaces principaux, qui sont tous cr&eacute;&eacute;s en collaboration, sont : le hall d'entr&eacute;e, qui est la premi&egrave;re pi&egrave;ce avec laquelle nos utilisateurs interagissent lorsqu'ils entrent dans l'exp&eacute;rience, et deux autres pi&egrave;ces secondaires.</p>\r\n<p>Le projet fusionne des stimuli visuels abstraits avec diverses techniques auditives (par exemple, l'ambisonie, les sons &agrave; fr&eacute;quence sp&eacute;cifique, les bruits de fond, l'audio binaural et les battements binauraux) et cr&eacute;e des visuels audio r&eacute;actifs qui transportent l'utilisateur dans un &eacute;tat de conscience contemplatif.</p>\r\n<p><em>Still Waters Run Deep</em> vise &agrave; d&eacute;stigmatiser et &agrave; repenser la narration autour des &eacute;motions, de la sant&eacute; mentale et de la neurodiversit&eacute;. 
C'est ce qui motive notre choix de cr&eacute;er un espace individuel, sans pr&eacute;jug&eacute;s, qui accorde une grande importance au parcours personnel de chaque individu tout en pr&eacute;servant un sens de la communaut&eacute;.</p>\r\n<p>Les caract&eacute;ristiques de chaque pi&egrave;ce sont les suivantes : le hall d'entr&eacute;e est une exp&eacute;rience audio ambisonique de 5 &agrave; 8 minutes bas&eacute;e sur une lecture en boucle de 60 bpm et 432 Hz. La premi&egrave;re salle pr&eacute;sente une &oelig;uvre informatis&eacute;e de particules d'eau qui r&eacute;agit &agrave; des battements binauraux sp&eacute;cifiques, bas&eacute;s sur une fr&eacute;quence de 30 Hz avec un battement r&eacute;gulier, et comporte une voix m&eacute;ditative guidante, enregistr&eacute;e au RCA Sound Lab avec un microphone binaural. La deuxi&egrave;me salle pr&eacute;sente un enregistrement de champ de flottement d'eau qui &eacute;mule le son d'une chute d'eau au rythme lent. Les trois salles sont construites sur une approche informatis&eacute;e des m&eacute;thodes th&eacute;rapeutiques conventionnelles, &agrave; l'aide de logiciels de technologie moderne tels que Unreal Engine, les battements binauraux et les synth&eacute;tiseurs de sons.</p>\r\n<p>Cette exp&eacute;rience &agrave; 360&deg; est con&ccedil;ue comme un atout cr&eacute;atif alternatif, les pratiques th&eacute;rapeutiques professionnelles se situant tout en bas de notre pyramide de recherche. 
Soucieux de nous assurer que notre projet repose sur une base solide, tant sur le plan cr&eacute;atif que scientifique, nous avons consult&eacute; des conseillers en cr&eacute;dit sonore et artistique, dont la pratique a inspir&eacute; les principales caract&eacute;ristiques de l'intrigue finale, telles que les publics de niche, le tempo, la hauteur, la fr&eacute;quence et la vibration des sons.</p>\r\n<p></p>\r\n<p><strong>Noms des collaborateurs :<span>&nbsp;</span></strong>Jiayou Wu, Polina Ami Kosele, Teodora Serbanescu</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 65514,
            "forum_user": {
                "id": 65444,
                "user": 65514,
                "first_name": "Polina Ami",
                "last_name": "Kosele",
                "avatar": "https://forum.ircam.fr/media/avatars/headshot.jpg",
                "avatar_url": "/media/cache/02/2f/022f2d3daad13479f75706981e815c3a.jpg",
                "biography": "Polina is an accomplished creative with over 5 years of experience in multimedia production. Driven by a passion for art, storytelling, and audience engagement, her ambition is to transcend traditional frameworks by crafting deeply impactful narratives and tapping into the power of emotions to deliver meaningful visuals that enhance digital experiences.\n\nHer expertise encompasses digital design, end-to-end video production, photography, and most recently, interactive experience design. Polina’s work has been showcased at the Riga and Jurmala Art Fairs 2018 and has notably earned recognition with 5 International Film Festival Awards (2020-22), as well as features in magazines across print and digital platforms.\n\nIn her current practice at the Royal College of Art, Polina is exploring creative direction and the convergence of immersive technologies with storytelling. Her upcoming project 'Rat Rule' delves into the world of human-rat relationships and animal research, criticising the ongoing issues in legislation oversight through a 3D animated Virtual Reality Film.",
                "date_modified": "2024-03-02T20:47:02.359520+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "polinaamikosele",
            "first_name": "Polina Ami",
            "last_name": "Kosele",
            "bookmarks": []
        },
        "slug": "still-waters-run-deep-2",
        "pk": 2782,
        "published": true,
        "publish_date": "2024-03-01T23:47:04+01:00"
    },
    {
        "title": "\"SongEval: A Benchmark Dataset for Song Aesthetics Evaluation\" by Huixin Xue",
        "description": "Aesthetics serve as an implicit and important criterion in song generation tasks that reflect human perception beyond objective metrics. However, evaluating the aesthetics of generated songs remains a fundamental challenge, as the appreciation of music is highly subjective. Existing evaluation metrics, such as embedding-based distances, are limited in reflecting the subjective and perceptual aspects that define musical appeal. To address this issue, we introduce SongEval, the first open-source, large-scale benchmark dataset for evaluating the aesthetics of full-length songs.",
        "content": "<h5 id=\"➡️-this-presentation-is-part-of-ircam-forum-workshops-paris-engh\"><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></h5>\r\n<p>SongEval includes over 2,399 songs in full length, summing up to more than 140 hours, with aesthetic ratings from 16 professional annotators with musical backgrounds. Each song is evaluated across five key dimensions: overall coherence, memorability, naturalness of vocal breathing and phrasing, clarity of song structure, and overall musicality. The dataset covers both English and Chinese songs, spanning nine mainstream genres. Moreover, to assess the effectiveness of song aesthetic evaluation, we conduct experiments using SongEval to predict aesthetic scores and demonstrate better performance than existing objective evaluation metrics in predicting human-perceived musical quality, We provide the dataset and toolkit for song aesthetic evaluation at<span>&nbsp;</span><a href=\"https://huggingface.co/datasets/ASLP-lab/SongEval\">https://huggingface.co/datasets/ASLP-lab/SongEval</a>&nbsp;and<span>&nbsp;</span><a href=\"https://github.com/ASLP-lab/SongEval\">https://github.com/ASLP-lab/SongEval</a>&nbsp;&nbsp;</p>\r\n<p><img src=\"https://forum.ircam.fr/media/uploads/huixin_xue.png\" alt=\"\" width=\"682\" height=\"382\" /></p>",
        "topics": [],
        "user": {
            "pk": 138732,
            "forum_user": {
                "id": 138552,
                "user": 138732,
                "first_name": "Huixin",
                "last_name": "Xue",
                "avatar": "https://forum.ircam.fr/media/avatars/%E8%96%9B%E8%95%99%E5%BF%83%E7%85%A7%E7%89%871_XGVG8AQ.jpg",
                "avatar_url": "/media/cache/4a/5b/4a5ba8b899cbd8627187acf88a763256.jpg",
                "biography": "Huixin Xue is a Chinese composer, music producer and Music AI researcher. She is a Ph.D. candidate in Music AI at Shanghai Conservatory of Music under the supervision of Professor Liu Hao, an exchange student at the Hamburg University of Music and Theatre. She graduated from the Music Engineering Department of Shanghai Conservatory of Music both for her bachelor's and master's degrees both as the top of her major. \nHer pieces won numerous awards, including The Honorable Mention of the 2024 Sound Chain International Electronic Music Composition Competition (the only Chinese winner among the 6 winners worldwide). Her work was presented at the 2025 ICMC. Her pieces have been performed at major festivals. She also has participated in over twenty commercial music creation projects.\nDuring her doctoral studies, she participated in the development of the AI Music Therapy Pod at the Shanghai Conservatory of Music, co-developed SongEval, the first aesthetic evaluation dataset for AI-generated songs, and contributed to organizing the Automatic Song Aesthetic Evaluation Challenge at ICASSP 2026.",
                "date_modified": "2026-03-03T23:05:36.180036+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "xuexue1",
            "first_name": "Huixin",
            "last_name": "Xue",
            "bookmarks": []
        },
        "slug": "songeval-a-benchmark-dataset-for-song-aesthetics-evaluation-by-huixin-xue",
        "pk": 4107,
        "published": true,
        "publish_date": "2025-12-23T10:49:47+01:00"
    },
    {
        "title": "Les Thermophones, récentes évolutions et spatialisation",
        "description": "Communication pour le Forum de l’Ircam à Montréal (IRCAM Forum Workshop in Montreal) février 2021.\r\nhttps://jacques-remus.fr/actualites/forum-ircam-montreal-2021/",
        "content": "<div class=\"page\" title=\"Page 1\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Communication pour le Forum de l&rsquo;Ircam à Montréal (</span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(20.784310%, 20.784310%, 20.784310%);\">IRCAM Forum Workshop in Montreal) </span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">février 2021 </span></p>\r\n<p><span style=\"font-size: 16.000000pt; font-family: 'Times'; font-weight: bold;\">Les Thermophones, récentes évolutions et spatialisation </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 16.000000pt; font-family: 'Helvetica';\">Jacques Rémus </span></p>\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-style: oblique;\">( Ipotam Mécamusique ) </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(0.000000%, 0.000000%, 100.000000%);\">jacques.remus@mkz.fr </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-weight: bold;\">1) Introduction </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Bonjour, et bien je suis très heureux de pouvoir vous parler de mon travail avec les Thermophones, des évolutions de ce travail et puis, plus spécialement du rapport avec la spatialisation. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Je me présente : je suis un artiste indépendant travaillant à la fois la composition musicale et la réalisation de sculptures sonores ou machines musicales. Mon atelier est à Paris et je dirige une compagnie de spectacles qui produit et diffuse mes oeuvres. 
Actuellement &laquo; NON ESSENTIEL &raquo; !. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Je me suis passionné pour le son des phénomènes thermo-acoustiques depuis une trentaine d&rsquo;années. La nature de ces sons puissants et très particuliers liée à la possibilité de générer des musiques avec des tuyaux mobiles sans soufflerie m&rsquo;a fait pas mal rêver. Je n&rsquo;ai cependant commencé à passer à la réalisation que depuis une douzaine d&rsquo;années avec un premier spectacle de pyrophones automatisés en 2007. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Depuis j&rsquo;ai fait de nombreux essais et expériences, aidé par des collaborations avec des équipes de recherches au USA et en France avec le CNRS. J&rsquo;ai réalisé en particulier un ensemble expérimental d&rsquo;une quarantaine de Thermophones avec lequel j&rsquo;ai fait quelques concerts et installations. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Une présentation a été faite au Forum de l&rsquo;Ircam en novembre 2015.. </span></p>\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-weight: bold;\">2) Naissance de la Thermoacoustique </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Les souffleurs de verre : depuis la haute antiquité ils ont remarqué que parfois leurs tubes émettaient des sifflements stridents très puissants, c&rsquo;est sans doute le phénomène le plus ancien de ce genre ainsi que la pratique dans les temples </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 2\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">japonais shintoïstes, avec des récipients chauffés contenant du riz et qui émettent un long son thermoacoustique . Ils sont utilisés par les prêtres dans des rituels divinatoires. 
It is called the Kibitsu No Kama.<br />It was in the 19th century that experimenters discovered various forms of this phenomenon, the best known being the sounds heard by lamplighters as they handled their lamps. The Rijke, Sondhauss, Hopfler and Taconis tubes appeared, with no explanation (until, later, those of the English physicist Rayleigh, then Rott) and no application, apart from a timid Stirling engine; but there was also an Alsatian musician and scientist, Frederic Kastner, who built a &laquo; gas organ &raquo; in 3 copies, one of which I was able to see and which was brought back into working order in the 1960s. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">FILM<br />Then came oblivion. A few works collected these experiments, among them Henri Bouasse's. Around 1930, Germans and French independently developed the ancestor of our jet engines, the pulsejet: a tube in which injections of petrol explode at the rhythm of the tube's fundamental acoustic frequency. The French gave up; the Germans made it the propulsion of their first flying bombs, the V1s. The Americans recovered these machines, and after various wanderings the science of thermoacoustics was born at Los Alamos, birthplace of the atomic bomb. Someone had the idea of reversing the phenomenon, and the main application was, singularly, cryogenics: sound in a high-pressure fluid creates cold at the other end! This made it possible to approach absolute zero, and the other application was the thermoacoustic refrigeration of spacecraft, with no motors and no dangerous gases. Since then many laboratories have appeared, now mainly in China and Japan, working on the question. 
Several artists have developed flame organs; I would mention two friends, Trimpin in the USA and Michel Moglia in France, not to speak of experiments with solar energy and parabolic mirrors. For my own part I have tried to go in another direction: electric Thermophones. </span></p>\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-weight: bold;\">3) Theoretical principles </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Of course I am not going to give a course on thermoacoustics or fluid mechanics, of which I am quite incapable.<br />Thermoacoustics is a relatively young discipline at the crossroads of thermodynamics, heat transfer and acoustics. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">To simplify, consider what happens in a tube in which a porous sleeve (the stack) has been placed at roughly 1/4 of its length, formed in theory of stacked &laquo; plates &raquo; and in practice of ceramic micro-tubes or metal grids. If strong heat is applied at one end of the stack, creating a temperature difference of several hundred degrees with the other end, after a while an instability sets in, due to a phase shift between the velocity of the gas molecules and their pressure. The molecules shuttle </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">2 </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 3\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">back and forth between the two temperature poles. This takes place within the acoustic boundary layer along the walls of the &laquo; plates &raquo;. 
A standing wave then starts up acoustically in the tube, converting heat energy into sound. This is the thermoacoustic instability. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Many sites give the formulas for this phenomenon, but I am going to show you a short film made by Steve Garret's lab.<br />FILMS science films Garret,<br />And if you want to know how to run a small Thermophone in your kitchen, or even on your desk, watch this marvellous little film made by Thibault Combe at the LAUM laboratory in Le Mans (see the reference at the end of the text) </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">film Combe Laum, </span></p>\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-weight: bold;\">4) Special features and characteristics of the Thermophones' sounds </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Here I come to a problem that everyone knows and that is nothing original, but that handicaps me enormously here: the rendering of Thermophone sounds over loudspeakers is very far from the real thing. Several sound engineers, who came for television or a video shoot, have been very disappointed by the recordings, made with good equipment though they were, compared with what they had experienced live. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">That is why I was so keen to come to Montreal with a few pipes, had the conference been able to take place &laquo; in prrresence &raquo;!! </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">None of this is easy to reproduce over loudspeakers, let alone those of a laptop! 
Listening on headphones improves the rendering, however, so if you can use a pair now, so much the better. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">First of all I will play you the sound of a pyrophone.<br />Pyro-, from the Greek pûr: fire...!<br />FILM<br />The sound starts a few seconds after a grid is heated red-hot, and consists essentially of the fundamental plus, here, two partials. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">For the Thermophones (thermo- being more generic than pyro-, though the word also denotes a telephony system dating from 1881), I was led to carry out many experiments, but without the test benches and resources of a laboratory. I put together a measurement bench of my own, very improvised but very reliable. It is on these data that, with the LIMSI-CNRS teams (in particular Diana Baltéan-Carlès, Catherine Weisman and Christophe d'Alexandro), we produced a first publication on 3 experiments; since then I have recorded about a hundred, which are now being analysed. The aim is, on the one hand, to deepen our understanding of the phenomenon and, on the other, to find the best pipe dimensions and above all the best &laquo; stack &raquo; characteristics, so as to arrive at an instrumentarium that is easy to play and to build. And that is not easy! 
</span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">3 </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 4\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Let me show you a few examples of measurements.<br />The decibel readings were not rigorously controlled, but with simple portable meters we measured 95 to 105 dB at 1 m, and up to 135 dB-C at the pipe outlet; in other words, the measurements were made wearing earplugs and ear defenders!<br />FILMS </span></p>\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-weight: bold;\">5) Realities and research </span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; font-weight: bold;\">5-1) Characteristics related to spatialization </span></p>\r\n<p><span style=\"font-size: 18.000000pt; font-family: 'Helvetica';\">* </span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The sound of a single pipe has these characteristics: </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">- it is perceived at different intensities depending on where one stands, rather than simply fading as one moves away </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">- it is fairly trying to listen to over any length of time </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The reasons: it is an almost sinusoidal sound, hence wearing over time, all the more so as it comes out at a very high sound level (one sometimes has the impression of swallowing one's own words when trying to speak). 
Moreover, zones of acoustic pressure form, like great waves that weaken as one moves away, intensify again as one moves a little further away, then fade, then rise again, and so on, and that are also reflected off the walls, creating curious sensations. </span></p>\r\n<p><span style=\"font-size: 18.000000pt; font-family: 'Helvetica';\">* </span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The sound of several pipes has other characteristics: built on harmonics of the low notes, the resulting sounds are richer and easier to bear. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Above all, pipes that are in unison, at the fifth, at the fourth, or even a semitone apart interfere with one another, and since the pitches vary slightly with the heat, beats are produced.<br />These beats are rather interesting, both musically and physically. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">If pipes sounding notes around 100 Hz, and of course below, are played together, the beats are at once powerful and almost inaudible, and it is the muscular membranes that make them felt, for they enter the infrasound range without their partials being heard (if it went through the bones we would be in trouble; through the lungs, we would be dead). </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Listeners often come out of the small concerts or demonstrations I have given quite taken aback.<br />So I often invite them to move around inside the installation, for the zones of acoustic pressure then become highly changeable, and the sounds heard, or felt, are unusual to say the least. 
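</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The beats described above can be sketched numerically. This is an illustrative aside, not the author's model: it treats each Thermophone as an ideal open-open pipe at a single mean gas temperature (a strong simplification, since a real pipe has a steep gradient across the stack), and the pipe length and temperatures are made-up values. </span></p>

```python
import math

def speed_of_sound(temp_c):
    # Approximate speed of sound in air (m/s) at temp_c degrees Celsius.
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

def fundamental(length_m, mean_temp_c):
    # Fundamental of an idealized open-open pipe: f = c / (2 * L).
    # mean_temp_c stands in for the mean gas temperature inside the pipe
    # (an assumption: a real Thermophone is not at a uniform temperature).
    return speed_of_sound(mean_temp_c) / (2.0 * length_m)

# Two nominally identical 1.5 m pipes (hypothetical values) whose internal
# temperatures have drifted apart through the heating:
f_cool = fundamental(1.5, 20.0)   # roughly 114 Hz
f_hot = fundamental(1.5, 60.0)    # roughly 122 Hz
beat = abs(f_hot - f_cool)        # a beat of a few Hz, i.e. infrasonic
print(f_cool, f_hot, beat)
```

<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">On this idealization, two matched pipes whose internal temperatures differ by a few tens of degrees beat at a few hertz, which is consistent with the infrasonic beats felt by the body rather than heard. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">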
</span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">4 </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 5\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-2) </span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; font-weight: bold;\">Other characteristics and special features </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The Thermophones here are pipes (10 cm to 3 m long) fitted with a heat exchanger using high temperatures (200 to 800&deg;C); they should not be confused with the Pyrophones, which run on gas burners and depend on convection. Both, however, obey similar physical laws arising from the science of thermoacoustics. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The Thermophones, assembled into a kind of organ stop (for the moment some forty pipes from 60 to 600 Hz), have several fairly original characteristics, which for the moment I can sum up in the following points: </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-2-1) Acoustic nature: emission of a sound in which the fundamental frequency is strongly dominant and whose output level is powerful (95 to 105 dB at 1 m). This power can be modulated by shutters at the ends of the pipes. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-2-2) Positioning: emission of sounds from simple pipes (steel, aluminium or glass) which, connected without any blower by an electric wire, can be placed independently of one another, and can therefore adapt to the acoustics of a venue and allow naturally spatialized listening. 
</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-2-3) Latency: a substantial delay (4 to 40 seconds) before the sound starts if the instrument is triggered by heating the heat exchanger from room temperature, but an almost instantaneous response if one acts mechanically on a pipe whose exchanger is already hot. Likewise, the sound lingers for a significant time after the heating is switched off, and this too can be controlled mechanically. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-2-4) Fine variation of the emitted frequency, and hence of each pipe's note, with the temperature variations inside the pipes. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-3) </span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; font-weight: bold;\">Technical and acoustic research in progress: </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-3-1) The heat exchangers, or &laquo; stacks &raquo;, can be of very varied kinds. Various investigations have been and are being carried out in scientific laboratories, and we have developed some of our own as well, but the parameters of acoustic efficiency, cost, reliability, safety and convenience have not yet been brought together in a standard model, and mastering them is a task for the future. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The problem with stacks: they must be permeable to the air flow, yet their resistance must not be so bulky as to choke it. So it is either home-made systems (goldsmith's work, and very dangerous) or industrial systems that are ill-suited, costly and cumbersome, but safe. 
</span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">5 </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 6\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-3-2) The basic and fine tuning of the pipes is currently done on the principles of the wooden pipes of classical organs, but could be done by other techniques using slides with high-temperature greases. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-3-3) Pipe stands that allow the pipes to be placed and moved easily, and that allow them to be handled (playing with gloves, tilting), have been developed and used, but they are only prototypes and their improvement is under way. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-3-4) The mechanization needed to control attacks, durations and intensities remains a major undertaking, as does temperature control. Prototypes are being studied, and generalizing these systems, to allow playing from a &laquo; keyboard &raquo; or by direct robotic response, will require additional funding for a complete set of pipes. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">5-3-5) The objective of this technical research is to develop models of Thermophones that will be musically more advanced than the current prototypes and, above all, easily reproducible. 
</span></p>\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-weight: bold;\">6) Spatialization </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The Thermophones lend themselves readily to installations in which the audience is physically immersed among multiple natural sound sources.<br />The Thermophones can be placed independently of one another, and can therefore adapt to the acoustics of a venue and allow naturally spatialized listening. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Technically, each pipe needs only a stand and a cable. There is no blower, and the pipes can even be fitted with a mini-umbrella for outdoor installations. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Depending on the sophistication required of the Thermophone, it will simply be heated up remotely; or played with more nuance if handled by a musician; or again played automatically if fitted with flaps driven by stepper motors and strikers actuated by electromagnets. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">One must also reckon with the curious phenomenon of the changes in intensity (variations in acoustic pressure), which decrease, then increase again, and so on, like long motionless waves, as one moves away from the sources. 
</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">In addition, the phenomenon of beats, which produce a powerful third sound, </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">6 </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 7\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">sometimes too low to be perceived by the ears alone, gives sounds and sensations that vary with where one stands. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">That is why, at every presentation, I invite the audience to move slowly through the installations. </span></p>\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-weight: bold;\">7) Artistic research: writing and construction </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Musical writing for these instruments is for the moment of two kinds: </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">- knowing the latencies before the sound starts (L.A.S.) and before it stops after cut-off (L.S.A.C.), writing on a sequencer makes it possible to anticipate and adjust the beginning and end of the sounds. Writing based on delays directly in software such as MAX is also relatively easy. Moreover, computer control works directly over the MIDI and, above all, DMX protocols. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">- handling each pipe with gloves, by a performer or an improviser, makes it possible to play the Thermophones with nuance and precision. 
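</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The first kind of writing, compensating the onset latency (L.A.S.) and the stop latency (L.S.A.C.) when placing events on a sequencer, can be sketched as follows. This is a hypothetical helper, not the author's software: the function name, the event format and the latency figures are illustrative assumptions. </span></p>

```python
def schedule_heat_commands(notes, onset_latency_s, release_latency_s):
    # Shift each desired sound interval back by the pipe's measured latencies,
    # so that the heat-on / heat-off commands (sent, for example, over MIDI
    # or DMX) are issued early enough for the sound to start and stop on time.
    # notes: list of (sound_start_s, sound_end_s) pairs; latencies in seconds.
    commands = []
    for start, end in notes:
        heat_on = max(0.0, start - onset_latency_s)        # begin heating early
        heat_off = max(heat_on, end - release_latency_s)   # cut heat before the intended end
        commands.append((heat_on, heat_off))
    return commands

# A pipe with a 12 s onset latency and a 3 s stop latency (made-up figures
# within the 4-40 s range quoted earlier):
print(schedule_heat_commands([(20.0, 35.0), (50.0, 58.0)], 12.0, 3.0))
```

<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The same offsetting can of course be done by hand on a sequencer track, or with delay objects in an environment such as MAX, as described above. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">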
</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The first Thermophone trials, developed from a prototype sent by Steve Garret (Penn State University), immediately made me envisage creations for this type of instrument, which proved far more interesting than the pyrophones I had worked with (creations in 2007, 2008, 2009). </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Indeed, the very particular nature of these sounds, and the possibilities of mastering and modulating them, have opened up various avenues that I have barely begun to explore, and that other people have also suggested to me. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">7-1) Scores for musicians or performers handling the Thermophones manually </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Placing the hands on the ends of the pipes allows modulations of volume and attack that give the sensation of &laquo; kneading &raquo; the sound.<br />I have therefore experimented with improvisations using 1 or 2 pipes per participant, the pipes being kept constantly hot. One realization took place during a concert with the company Décor Sonore: 2 sets of 4 Thermophones were handled at the two ends of the tunnel of the rue Watt (75013 Paris), and played for a whole evening with several musicians who moved through the tunnel, supported by an electroacoustic diffusion by Michel Risse. 
</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">I therefore plan to develop creations, and hence scores, for this type of </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">7 </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 8\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">music, with the participation of other musicians.<br />In addition, the Thermophones' supporting structures allow the pipes to be tilted through almost 360&deg;. This too produces variations in the sound.<br />The gesture of the handlers seizing the pipes, and their whole movement to trigger and modulate the sound, also leads me to reflect on a more or less choreographic aspect of these actions. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">7-2) Automated playing with wood percussion and tubular carillon </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Without handlers, I experimented with mini-concerts at the Biennale du Mans (December 2019), with some forty Thermophones, roughly in tune, and mechanized percussion on wood blocks. The contrast between the Thermophones' long layers of sound and the dry, rhythmic percussion allowed me to open up the listening of the Thermophones at another level than my previous installations of &laquo; strangely sounding pipes &raquo;. I want to develop this material, and the next step will be to add a 40-note tubular carillon with luminous mallets that I built a few years ago. 
</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">This opens up prospects for automatic installations and concerts, since the wood blocks (called Pic-Verts) and the carillons (Carillons_N&deg;3, created in 2001) can themselves also be placed in space with no constraint other than an electric wire. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">7-3) Playing with acoustic instruments </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Quite apart from the manipulations, one of the avenues I want to explore is the sound of the Thermophones together with acoustic musical instruments.<br />A project with a monumental acoustic instrument such as a baroque or romantic organ can also be envisaged. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The instruments must remain acoustic, for the magic of the Thermophones, with no visible mechanism, vanishes as soon as the music is also produced by loudspeakers: the audience assumes the sound comes only from the speakers and that the pipes are merely decorative. This limits the possibilities of playing the Stade de France, but the possibilities of musical writing for Thermophones and acoustic instruments remain for me one of the most important foundations to develop. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">7-4) Thermophones and voices </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Meeting several singers, choristers and choir directors showed me their interest in practising their art in combination with the Thermophones, and this remains for me a field to explore, as exciting as that of the acoustic instruments. The Thermophones' capacity to hold &laquo; living &raquo; continuous basses has much to do with this attraction. 
I have run a few simulations with recordings of Bulgarian songs, but these are only avenues still to be explored. </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">8 </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 9\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Moreover, at the threshold of the thermoacoustic instability, the Thermophones can be triggered by the voice! This opens up many possibilities for installations or play. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">7-5) Picking up the sounds with microphones </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Contrary to what I stated above about the incompatibility of loudspeakers with the Thermophones, it occurred to me that, on the contrary, the assertive presence of pickups on the pipes could allow very interesting musical developments. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">I imagine the Thermophones dispersed in such a way that the audience is surrounded, or inside a kind of forest, and their songs at times transposed into very different sounds, closely tied to their origin, diffused by an electroacoustic installation. 
</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">I also imagine the direction of these multiple sound productions, which will form a second musical stratum, being controlled by one or more &laquo; conductors &raquo; equipped with systems of the Camera Musicale type.<br />Of course, the pickups could also be used to generate video &laquo; waves &raquo; from the pipes' sinusoidal signals, hence a collaboration with a video artist. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">7-6) Outdoors </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">The power of the Thermophones also makes it possible to envisage these various forms of musical play outdoors, taking the precaution of protecting the openings against the weather. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Finally, there are the prospects of playing and writing for future robotized Thermophones (responding without latency to a keyboard or a computer, like an organ stop, and with nuances of intensity): all the preceding avenues other than manual handling can of course be exploited, but such an instrumentarium opens up many other possibilities, in particular with control devices such as the Camera Musicale. 
</span></p>\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-weight: bold;\">8) Some examples of concerts and installations </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">FILMS and photos </span></p>\r\n<p><span style=\"font-size: 14.000000pt; font-family: 'Helvetica'; font-weight: bold;\">9) Conclusion </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">I have tried to describe to you both the nature of the sound material of the </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">9 </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 10\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Thermophones, for those who did not know it, and the stages I have reached in this adventure. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Technically, the hurdle to be cleared is, on the one hand, the making of heat exchangers, &laquo; stacks &raquo; that are efficient and easy to reproduce, and on the other, the remote control or robotization of the pipes' attacks and volume variations. One of the objectives is to be able to provide the instructions so that sets of Thermophones can be built almost anywhere people feel like playing with these singular pipes. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Musically, the point is not to drive the audience away (!), but to keep enchanting it, and also to experiment with the many combinations of writing and improvisation that are possible with musicians and other sound sources. 
</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">As for spatialization, and this is the frustration of giving this talk remotely (!), I hope you have understood that it is a major asset of the Thermophones. </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">And so, if you want to take part in the new adventure of the Thermophones, </span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(0.000000%, 0.000000%, 91.372550%);\">contact me</span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">. (https://jacques-remus.fr/contact/) </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(0.000000%, 0.000000%, 100.000000%);\">https://jacques-remus.fr or https://mkz.fr </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Videos (copy and paste the links if clicking gives no response) </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">2016: DIY Steel wool thermoacoustic engine </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(0.000000%, 0.000000%, 91.372550%);\">Institut d'Acoustique Graduate School - IAGSLeMans </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">https://www.youtube.com/watch?v=owbjLWrC86g<br />Build your own standing wave thermoacoustic engine using only a test tube and some steel wool.<br />Listen to the sound generated by means of a temperature gradient through a porous material. 
</span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">2019 : Thermophones Jacques Rémus Biennale Le Mans Sonore 3&rsquo;34 &ldquo; </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(0.000000%, 0.000000%, 91.372550%);\">https://vimeo.com/391952984 </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Reportage journalisme scientifique : les Thermophones<br />2017 Reportage Julie Desriac et Léa Nanni, étudiantes en Journalisme scientifique, Université Paris VII (Diderot), 8&rsquo; 40&ldquo; </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">10 </span></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 11\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(0.000000%, 0.000000%, 100.000000%);\">https://www.youtube.com/watch?v=wowc2FyUvrI </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Thermophones Curiositas2017<br />Les Thermophones, film présenté au Festival Curiositas 2017 montrant avec démonstration d'un prototype, le travail de recherche et de création artistique entre les équipe du CNRS et Jacques Rémus. 10&rsquo;58 &ldquo;<br /></span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(0.000000%, 0.000000%, 100.000000%);\">https://www.youtube.com/watch?v=THPajH4X-Dw </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">Thermophones 2 3&rsquo;49&ldquo;<br />2017 Thermophones Présentation au festival Curiositas (CNRS) et ateliers des Frigos. 
</span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(0.000000%, 0.000000%, 100.000000%);\">https://www.youtube.com/watch?v=C1uCXFMmeEU </span></p>\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Helvetica';\">La Gestothèque : Jacques Rémus, outils, méthodes et matériaux 1 : La Caméra musicale 1&rsquo;56&ldquo;<br /></span><span style=\"font-size: 12.000000pt; font-family: 'Helvetica'; color: rgb(0.000000%, 0.000000%, 91.372550%);\">https://www.youtube.com/watch?v=x9ZQS-Zgv9I </span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span style=\"font-size: 12.000000pt; font-family: 'Cambria';\">11 </span></p>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 134,
                "name": "Audiosculpt",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 643,
                "name": "Ps2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 640,
                "name": "Thermophones",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 627,
            "forum_user": {
                "id": 627,
                "user": 627,
                "first_name": "Jacques",
                "last_name": "Rémus",
                "avatar": "https://forum.ircam.fr/media/avatars/Jacques_Remus_photo_Marine_Lale_600x600_DSC_7184.png",
                "avatar_url": "/media/cache/87/d5/87d5a3210f1b68fa331488c355189592.jpg",
                "biography": "Jacques Rémus\n\nBiologiste à l'origine (agronome et chercheur en aquaculture), Jacques Rémus a choisi à la fin des années 70, de se consacrer à la musique et à l'exploration de différentes formes de création. Saxophoniste, il a participé à la fondation du groupe Urban-Sax. Il apparaît également dans de nombreux concerts allant de la musique expérimentale (Alan Sylva, Steve Lacy) à la musique de rue (Bread and Puppet). \n\nAprès des études en Conservatoires, G.R.M. et G.M.E.B., il a écrit des musiques pour la danse, le théâtre, le \"spectacles totaux\", la télévision et le cinéma. Il est avant tout l'auteur d'installations et de spectacles mettant en scène des sculptures sonores et des machines musicales comme \"Bombyx\", le \"Double Quatuor à Cordes\", \"Concertomatique\", \"Léon et le chant des mains\", les \"Carillons\" N ° 1, 2 et 3, : « l'Orchestre des Machines à Laver » ainsi que ceux présentés au Musée des Arts Forains (Paris).\n\nDepuis 2014, son travail s'est concentré sur le développement des «Thermophones». La construction d’un orgue mobile de 40 Thermophones de 5ème génération a permis de créer en 2023 le spectacle-concert « Chœurs et Thermophones »",
                "date_modified": "2025-12-05T12:05:16.942583+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 69,
                        "forum_user": 627,
                        "date_start": "2025-12-05",
                        "date_end": "2026-12-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 344,
                                "membership": 69
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "REMUS",
            "first_name": "Jacques",
            "last_name": "Rémus",
            "bookmarks": []
        },
        "slug": "les-thermophones-recentes-evolutions-et-spatialisation",
        "pk": 1004,
        "published": true,
        "publish_date": "2021-11-21T18:36:56+01:00"
    },
    {
        "title": "Message Drone No.1 - Shangyang Yu",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>Message Drone No.1 is a multi-channel spatial sound installation. I&nbsp;sampled a series of message notification sounds in my&nbsp;mobile phone, tablet and other electronic devices. and&nbsp;slows down all&nbsp;these notification sounds by&nbsp;100 times. Through program control, the processed sounds&nbsp;will play continuously and randomly from different channel speaks, forming a massive drone&nbsp;of messages.</p>\r\n<p>&nbsp;</p>\r\n<p>Through this project, I reproduces the information overload situation we face in our daily life under the development of information and communication technologies. At the same time, through&nbsp;slowed down&nbsp;and&nbsp;listening to these&nbsp;message notification sounds, the act itself constitutes a resistance to&nbsp;the culture of speed in the accelerate society.</p>",
        "topics": [
            {
                "id": 1123,
                "name": "Concrete music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 636,
                "name": "Generative music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32915,
            "forum_user": {
                "id": 32867,
                "user": 32915,
                "first_name": "Shangyang",
                "last_name": "Yu",
                "avatar": "https://forum.ircam.fr/media/avatars/%E8%89%BA%E6%9C%AF%E5%AE%B6%E7%85%A7%E7%89%87.jpg",
                "avatar_url": "/media/cache/cc/7e/cc7edf7443a242f2e75dad957b62470f.jpg",
                "biography": "Shangyang Yu is a sound and media artist from China. With a background in fine art and public art, he is currently studying MA Information Experience Design at the Royal College of Art. His works mainly focus on video installation, experimental film, sound installation and live performance art. His creation comes from the observation of the contemporary local Internet environment. Through his works, he try to explore the intervention of current media in individual life and the individual identity in the virtual space and the real world at the information-ages.",
                "date_modified": "2023-02-05T23:37:53+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "shangyangyu",
            "first_name": "Shangyang",
            "last_name": "Yu",
            "bookmarks": []
        },
        "slug": "message-drone-no1",
        "pk": 2038,
        "published": true,
        "publish_date": "2023-02-06T01:04:53+01:00"
    },
    {
        "title": "Overview of research at IRCAM and its artistic and industrial applications by Hugues Vinet",
        "description": "",
        "content": "<p style=\"font-weight: 400;\"><span>The purpose of this conference is to provide an overview of IRCAM's research and its applications. After an introduction of IRCAM&rsquo;s organizational model combining scientific research, artistic production and technological innovation, it will present the current research themes in the field of sound and music technologies: sound analysis, synthesis and processing, 3D audio, computer-assisted composition, computer-assisted improvisation, gesture/sound interaction, real-time languages for interactive musical processes, sound cognition and design. Each theme will be illustrated by applications for artistic production, the creative and cultural industries and other fields of activity (automotive, healthcare, etc.).</span></p>\r\n<p style=\"font-weight: 400;\"><span>Duration : 1:30 with translation</span></p>\r\n<p style=\"font-weight: 400;\"><span>Bio : Hugues Vinet is Director of Innovation and Research Means of IRCAM. He has managed all research, development and innovation activities at IRCAM for the last 30 years. He co-founded and ran for several terms the STMS (Science and Technology of Music and Sound) joint lab with French Ministry of Culture, CNRS and Sorbonne University. He previously worked as senior engineer at the National Institute of Audiovisual in Paris where he managed the musical research and designed the first versions of the award-winning real-time audio processing GRM Tools product. He has coordinated many collaborative R&amp;I projects including recently the VERTIGO project managing a large-scale program of artistic residencies in European labs and the France 2030 Continuum project developing a new sound immersive experience for live performance. He is currently IRCAM&rsquo;s PI in the DAFNE+ EU project developing a blockchain/NFT based platform for fair distribution of digital artworks. 
He participates in various bodies of experts in the fields of audio, music, multimedia, information technology and innovation.</span></p>",
        "topics": [],
        "user": {
            "pk": 18210,
            "forum_user": {
                "id": 18203,
                "user": 18210,
                "first_name": "Hugues",
                "last_name": "Vinet",
                "avatar": "https://forum.ircam.fr/media/avatars/Hugues_Vinet_Portrait2017_large_low.jpg",
                "avatar_url": "/media/cache/4c/92/4c92397e1e69913141f89327eccc6007.jpg",
                "biography": "Hugues Vinet is Director of Innovation and Research Means of IRCAM. He has managed all research, development and innovation activities at IRCAM since 1994. He co-founded and ran for several terms the STMS (Science and Technology of Music and Sound) joint lab with French Ministry of Culture, CNRS and Sorbonne Université. He previously worked at the Groupe de Recherches Musicales of National Institute of Audiovisual in Paris where he managed the research and designed the first versions of the award winning real-time audio processing GRM Tools product. He has coordinated many collaborative R&D projects including recently H2020 VERTIGO in charge of the STARTS Residencies program managing 45 residencies of artists with technological research projects throughout Europe. He is currenty IRCAM's PI for EU MediaFutures project (artistic residencies for innovation in media) and DAFNE+ project dedicated to creatives' communities based on blockchain/NFT/DAO. He also curates the Vertigo Forum art-science yearly symposium at Centre Pompidou. He participates in various bodies of experts in the fields of audio, music, multimedia, information technology and innovation.",
                "date_modified": "2026-02-26T18:55:39.688865+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 417,
                        "forum_user": 18203,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-21",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "vinet",
            "first_name": "Hugues",
            "last_name": "Vinet",
            "bookmarks": []
        },
        "slug": "overview-of-research-at-ircam-and-its-artistic-and-industrial-applications-by-hugues-vinet",
        "pk": 3067,
        "published": true,
        "publish_date": "2024-10-24T14:39:17+02:00"
    },
    {
        "title": "sinusoidal run rhythm by Steffen Krebber",
        "description": "sinusoidal run rhythm is created by the addition of in-phase cosine functions in integer ratios.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p><em>&nbsp;</em></p>\r\n<p><em><span> &nbsp;&nbsp;</span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1a19375f204e66f80403266b33f6d699.jpg\" width=\"407\" height=\"407\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0a943ec23b76318c2e9d9caab12dbd11.jpg\" width=\"387\" height=\"387\" /><span>&nbsp;&nbsp;</span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4913dda24779543ce222f4caec4dd48a.jpg\" width=\"393\" height=\"393\" /><span>&nbsp;&nbsp;</span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/103b002d9e3148b8bcd7597d9a48ee83.jpg\" width=\"405\" height=\"405\" /></em></p>\r\n<p>Presented by : Steffen Kreber&nbsp;</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/steffenkrebber/\" target=\"_blank\">Biographie&nbsp;</a></p>\r\n<p>sinusoidal run rhythm is created by the addition of in-phase cosine functions in integer ratios. 
Their maxima are temporally and dynamically shifted in relation to corresponding notated rhythms and exhibit a physicality that is not present in discretely controlled rhythms.<span>&nbsp;</span><em>sinusoidal run rhythm</em><span>&nbsp;</span>models microtemporal grids with dynamic weightings that are singular for each underlying combination of partials, which are all very groovy.&nbsp;<span>&nbsp;</span><em>sinusoidal run rhythm</em><span>&nbsp;</span>therefore defines rhythm as a wave and thus clearly sets itself apart from the conventional rhythm theory of a European musical tradition.</p>\r\n<p><em>sinusoidal run rhythm</em><span>&nbsp;</span>is explicitly a supplement, specification and extension of existing rhythm models and rhythm theories and can integrate them.<br />The theory invites us to dissolve the boundaries between performance and performance, score and interpretation, man and machine and to search for applications in the genesis of music, music analysis, psychoacoustics or philosophy.</p>\r\n<p>A book and code were published by Wolke Verlag in 2023.</p>\r\n<p><a href=\"https://steffenkrebber.de/research/sinusoidal-run-rhythm/\" target=\"_blank\">https://steffenkrebber.de/research/sinusoidal-run-rhythm/</a></p>\r\n<p><a href=\"https://github.com/steffenkrebber/sinusoidalrunrhythm\" target=\"_blank\">https://github.com/steffenkrebber/sinusoidalrunrhythm</a></p>\r\n<p><a href=\"https://maxforlive.com/library/index.php?q=Krebber&amp;tag=all&amp;by=any\" target=\"_blank\">https://maxforlive.com/library/index.php?q=Krebber&amp;tag=all&amp;by=any</a></p>\r\n<p></p>\r\n<p></p>",
        "topics": [
            {
                "id": 647,
                "name": "Computer music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 272,
                "name": "Generative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2467,
                "name": "groove",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2466,
                "name": "microtiming",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 91,
                "name": "Music theory",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 88,
                "name": "Rhythm",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2468,
                "name": "rhythm theory",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 2031,
            "forum_user": {
                "id": 2029,
                "user": 2031,
                "first_name": "Steffen",
                "last_name": "Krebber",
                "avatar": "https://forum.ircam.fr/media/avatars/krebber.jpeg",
                "avatar_url": "/media/cache/da/ba/dabad9c74af12a053b772832c9bdd455.jpg",
                "biography": "Steffen Krebber is composer, sound artist and researcher. His work oscillates between computer music, instrumental and electroacoustic composition, sound art, research, language, epistemology, sociology and media art. h Amongst others he has held scholarships from the Schloss Solitude Academy, the Schreyahn Artists Retreat. His music has been performed at the Gaudeamus Muziekweek, the Witten New Chamber Music Festival, the ‘blurred edges’ Festival of Current Music (Hamburg), the ‘new talents’ Biennale (Cologne), Nachtstrom (Basle) and Piano+ at the Karlsruhe Center for Art and Media. He has also exhibited his work at the KOLUMBA Art Museum of the Archbishopric of Cologne, the Cologne Arts Association and the Schloss Solitude Academy. His language installation Weissagungen (‘divinations’) entered the permanent collection of the KOLUMBA Art Museum. As a composer he has worked with a great many ensembles and performers.",
                "date_modified": "2025-04-02T10:43:17.668846+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "steffenkrebber",
            "first_name": "Steffen",
            "last_name": "Krebber",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3168,
                    "user": 2031,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "sinusoidal-run-rhythm",
        "pk": 3168,
        "published": true,
        "publish_date": "2025-01-31T16:24:14+01:00"
    },
    {
        "title": "Web-based educational spatial audio system and interfaces for Boulez’ Dialogue de l’ombre double by Alex Ruhtmann",
        "description": "",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/image.tiff\" alt=\"\" /></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/ruthmann_headshot.png\" alt=\"\" width=\"350\" height=\"350\" /></div>\r\n<div class=\"c-content__button\">Presented by Alex Ruthmann</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/alexruthmann/\" target=\"_blank\">Biography</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -100px; top: -20.0052px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>\r\n<p>This presentation shares the development process and working version of a web-based &ldquo;spatial audio playground&rdquo; designed to present Pierre Boulez&rsquo;s seminal mixed-music work <em>Dialogue de l&rsquo;ombre double</em> (1985). This playground highlights the complementary roles that the composer, audio engineer/RIM, clarinetist, and audience members play in the presentation and experience of <em>Dialogue.</em> Originally conceived and prototyped as part of Dr. S. 
Alex Ruthmann&rsquo;s artistic research residency at IRCAM in 2020, this playground has been expanded to also serve as a web-based practice and performance realization environment for similar spatial audio pieces, in addition to its primary purpose as an interactive education tool for communicating the complexities of key spatial audio and mobile music works such as Pierre Boulez&rsquo;s <em>Domaines </em>(1961-1968) and Elliott Carter&rsquo;s <em>Clarinet Concerto </em>(1996)<em>. </em>This playground displays interactive score elements that allow a general audience member to follow visual representations and score excerpts in synchronization with binaural or discrete 6-channel spatialized audio recordings. Accessing the interface via mobile phone, tablet, or computer desktop, the audience member can place themselves in the virtual perspectives of the performer, audio engineer, or audience member, with the ability (where appropriate) to move around the space during the experience of the piece. These features also allow the user to be in live control of the sound spatialization cues, stepping into, rehearsing, and performing the live mixing roles of the audio engineer/RIM for <em>Dialogue. </em>The spatial audio playground may also be used by performers who wish to use a live web-audio environment to rehearse and present a live performance of <em>Dialogue</em>, with the added ability of being able to rehearse and experience the spatial audio aspects of the piece from the perspective of the audience or performer. A live audio engineer can also use the playground to realize the piece for performance.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>This spatial audio playground offers an innovative educational format for experiencing, studying, and listening to musical compositions with spatial audio and mobile music features. 
The presentation will demonstrate and present a tour of the playground system, including the various web-based spatial audio and processing technologies implemented, the design process, and specific educational designs that present the user with first person quotes and anecdotes from Boulez and collaborators providing deeper insight into the evolving compositional and creative processes contributed by the composer, audio engineers/RIMs, and performers who have been a part of this work&rsquo;s evolution over time.</p>",
        "topics": [],
        "user": {
            "pk": 92005,
            "forum_user": {
                "id": 91891,
                "user": 92005,
                "first_name": "Pierre",
                "last_name": "Provence",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/bb2fd89ea3d0aef48035393334059d96?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-04-30T12:50:45.553350+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1009,
                        "forum_user": 91891,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "provence",
            "first_name": "Pierre",
            "last_name": "Provence",
            "bookmarks": []
        },
        "slug": "web-based-educational-spatial-audio-system-and-interfaces-for-boulez-dialogue-de-lombre-double-by-alex-ruhtmann",
        "pk": 3355,
        "published": true,
        "publish_date": "2025-03-13T15:58:45+01:00"
    },
    {
        "title": "Monuments at the limit of the fertile trihedron. A note on extratemporal music and volumetric modelling sound synthesis",
        "description": "A proposal for a new electronic composition and synthesis method created in OpenMusic, complmented by the presentation of the score for the electronic piece \"Hors-Temps étude n.1\"",
        "content": "<div></div>\r\n<p><iframe width=\"1920\" height=\"1080\" title=\"Hors-Temps étude n.1.mp4\" src=\"https://player.vimeo.com/video/819158063?h=72c173f710&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p style=\"text-align: justify;\">What remains of music once time is removed from it? As paradoxical such a question may appear, its formulation may allow us to think deeper what music could really be &ndash; namely: something &shy;that was never meant to be simply listened to. If we suppress time, one could easily argue that we&rsquo;re also suppressing the possibility of music itself; but, on the other hand, the absence of a temporal dimension opens the chance to discover a network of underlying structures that touches on the ultimate nature of musical composition (whatever this nature may be). There we can recognize a system of relations that can be inspected and manipulated without caring about the ordering the unidirectionality of time inevitably imposes. The principal concern of this project will be thus to investigate the most radical implications of the notion of <em>hors-temps</em>, elaborated by architect and composer Iannis Xenakis (1922-2001) exactly to express that which, in the writing or in the analytical study of a musical piece, can always be thought without considering the &ldquo;before&rdquo; and &ldquo;after&rdquo; distinctions.</p>\r\n<p style=\"text-align: justify;\">The <em>hors-temps </em>categoryconcerns that which in music is independent of temporal becoming. This is not at all a strange new feature in Western compositional practices, considering that it already has been rooted for centuries in its most elementary operations, such as transposition, repetition, retrogradation, inversion and retrograde inversion. A melody is based on a temporal order of notes, its notation on a spatial order. 
When we play a written melody, its spatial order is converted into the temporal order of the sounds. But if we want to transform the melodic phrase further with the said operations, we must take it again &ldquo;outside time&rdquo;, treating it like an oriented planar figure, where the set of notes goes forwards and backwards, ascends and descends without reference to time. So, the aforementioned manipulations rely not on a temporal order, but on a spatial one. Transposition withdraws the melody from time, treating it as a geometric profile of pitches to be subjected to a vertical translation; whereas repetition applies on it a horizontal translation; inversion flips it &ldquo;upside down&rdquo; with a horizontal reflection; retrogradation, operating also on onset rhythmic contours, mirrors it with a vertical reflection; while the retrograde inversion results in a 180&deg; rotation of the starting material.</p>\r\n<p style=\"text-align: justify;\">In 1977 Xenakis and his collaborators brought to reality the prototype of an electronic tool for producing digital sound synthesis with strictly graphic methods, a hardware workstation baptized with the acronym UPIC (<em>Unit&eacute; Polyagogique Informatique du CEMAMu</em>, that is to say the <em>Centre d&rsquo;Etudes de Math&eacute;matique et Automatique Musicales</em>, founded by Xenakis himself). Its main interface consisted of a digitizing tablet reminiscent of an architect&rsquo;s desk, upon which the user could draw evolving frequency trajectories (here called &ldquo;sonic arcs&rdquo;) with the aid of an electromagnetic pen. On this tablet, every sonic arc was inscribed with its durational values on the abscissa and its pitch values on the ordinate. One could assign to each arc a waveform period, a pressure envelope and an overall loudness indication (resembling the score dynamics ranging from <em>pianissimo</em> to <em>fortissimo</em>). 
The tablet was connected to an analogue to digital converter, and the actions performed on it could be visualized on a screen or printed on paper with a copying machine. In substance, with the UPIC technology, organized sound was extracted literally out of time and displayed synoptically on the surface of a board as a series of drawings. Furthermore, this equipment allowed Xenakis to generalize the linear transformations in the plane with compressions and expansions of the sonic arcs (the UPIC&rsquo;s millimetric tablet could be arbitrarily changed in scale, the sounds could be shortened and stretched at will), as well as rotations through any angle.</p>\r\n<p style=\"text-align: justify;\">The graphic scores created on the UPIC were remarkably different from the ones produced since the beginning of the second half of the 20th century by the likes of E. Brown, M. Feldman, J. Cage, C. Cardew, M. Kagel, A. Logothetis and many others. In the latter cases, the visual aspect is prominent, reclaiming a considerable autonomy from the musical context to which it was linked &ndash; as if their authors, maybe unbeknownst to themselves, aspired to be refined, sophisticated draughtsmen instead of composers. What is more, these experiments yielded rather nebulous notation systems, authorizing a great amount of arbitrariness and interpretative laxity. Since precise instructions on how to decipher the drawings are often purposefully lacking, the notational symbolism is here necessarily vague and suggestive, and the resulting music always bears a remote, metaphorical connection to its score. 
On the contrary, within the UPIC framework the visual elements are directly translated into sound signals, without any interpretation mediated by analogies, synesthetic associations or other idiosyncratic readings.</p>\r\n<p style=\"text-align: justify;\">The UPIC also seems to establish a hierarchical privilege of sound over image, debunking the calligraphic flourishes and the unmotivated aesthetic exuberance of the indirect, analogical symbolism: for Xenakis, drawing was only a means to compose music, not an end <em>per se</em>. Since graphic synthesis was simply a technique, and not a goal, the rendition of images into acoustic waves was understood to be a unidirectional conversion, in which the visual aspects could be partially lost in the aural domain into which they had been encoded. In fact, the UPIC allowed a mapping from drawings to sounds, but it was difficult, if not impossible, to recover through spectral analysis the initial images from which the sounds were synthesized. In the first composition to be completed on the UPIC, <em>Myc&egrave;nes Alpha</em> (1978), undeniably an essential case study in graphic sound synthesis, the score shows only the pitch <em>versus</em> time inscriptions, and the music reveals the drawings behind it only vaguely in its spectrum. The fuzziness of the original traces (partly erased, partly superimposed) is arguably due to the heavy presence of aliasing, as well as to the use of complex sound pressure envelopes not shown in the score. The same happens in the later UPIC scores by Xenakis, like <em>Taurhiphanie</em> (1987-88) or <em>Voyage Absolu des Unari vers Androm&egrave;de</em> (1989). 
Now, this obfuscation comes at the expense of the perspicuity of the notation and engenders some major discrepancies between the audible result and the graphical procedures that have led to it (although the differences are not as radical as in what we called the &ldquo;indirect symbolism&rdquo;). So, the UPIC scores again run the danger of being essentially an approximation or an oblique evocation of the music.</p>\r\n<p style=\"text-align: justify;\">In order to get rid of these inaccuracies, the loudness envelopes should be displayed in the score simultaneously with the pitches and the durations, but this requirement forces us to take a totally new direction, embracing a three-dimensional coordinate system equipped with a &ldquo;trihedron of reference&rdquo; (as P. Schaeffer would say) &ndash; where <em>x</em> is the frequency axis, <em>y</em> is the amplitude and <em>z</em> is time. Since the visual data should be reconstructible from a suitable analysis of the sound without loss of information, how can we achieve full reversibility between music and virtual 3D objects? Through that same reversibility, we could bring the notation to a previously unreached rigor and give self-sufficiency to the visual aspect, without hiding it in the sounds anymore. This leads us to ask: what if the notation had exactly the same relevance as the musical output, representing an achievement valuable in its own right, being no longer a hypocritically undeclared ambition (as in the &ldquo;indirect symbolism&rdquo; mannerisms) or a mere tool instrumental to the composition (as in the UPIC)? These are some of the questions we tried to answer with our volumetric modelling graphic sound synthesis.</p>\r\n<p style=\"text-align: justify;\">First of all, we must begin by elucidating the main reasons and advantages in support of the volumetric solution. 
Even if we can always conceive a spectrogram as a representation of three-dimensional information plotted in a heightmap terrain, there are some strong limitations, soon to be highlighted, in using rasters of pixels to encode elevation values. Usually, in a spectrogram, the amplitude of the spectrum for each time frame is rendered by brightness in a grayscale plot, where higher values are mapped onto brighter pixels. The 2D spectrographic image can then be thought of as a view from the top of an irregular 3D surface generated by the sampled signal, with black representing minimum height (or distance from the floor of the surface) and white representing maximum height. If we can convert a sound spectrum into pixels, we can also perform the inverse operation. Here, encoding images in sound is made possible by simply reversing the method of creation of a spectrogram, so that the brightness of a pixel is converted into an amplitude value, and its position in the raster into a frequency and a time value. We can then observe that while the degree of brightness in a grayscale spectrogram can carry some depth information, the result will be very akin to a low relief, which is still a strong compromise between 2D and 3D. Hence, the choice of volumetric modelling is motivated by the many features which cannot be faithfully represented in a heightmap terrain. For example, cavities and meanderings cannot be shown in a heightmap, because only the elevation data are taken into account, leaving everything below unrepresented. 
The hollows that would otherwise be the inside of holes, or the underside of arches and protrusions, disappear as if a veil were laid over these objects.</p>\r\n<p style=\"text-align: justify;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8b7e8562641750841cd68b86e374314f.jpg\" /></p>\r\n<p style=\"text-align: justify;\">Image 1 &ndash; <em>Paving the way for the transition from 2D to 3D. </em>In OpenMusic we patched a series of algorithms to write a photograph as sound. We began A) creating the heightmap and B) storing it in a 1GB4 SDIF file; then C) we checked the content of the resulting file in SDIF-Edit; and D) we synthesized it with the SuperVP phase vocoder engine. As we can see in E), the resulting spectrogram correctly displays the photograph encoded in sound.</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<p style=\"text-align: justify;\">Now, if we actually step into the realm of a 3D space, we must be able to visualize and manipulate entities with high topological complexity (translating them from and into sound). Thus, it becomes necessary to switch from pixels to voxels, and also from FFT spectra to sinusoidal models made for partial analysis and resynthesis. In a volumetric terrain, partials are easily described by successive sample points, where each point is a voxel, written as a triplet of (positive or negative) real numbers. In this 3D environment, partials are connected series of points, represented as breakpoint functions. Analysis and resynthesis of partials will then provide trajectories with instantaneous frequency (in Hz) and amplitude (linear) values changing along temporal sampling frames. The one-to-one relationship we encountered in the heightmap terrains between images and sounds is here confirmed, because every voxel corresponds to a sample point of a partial, and conversely. 
This means that the partials&rsquo; breakpoints, as temporal indices with matching frequency and intensity parameters, can be understood as the breakpoints of 3D curves, and vice versa. To test the efficacy of this correspondence, we decided to work with a genuinely three-dimensional architecture, full of arching shapes, curving meanders and warped surfaces. We elected Xenakis&rsquo; Philips Pavilion for the 1958 Brussels World Fair as our touchstone. Having found that the Pavilion&rsquo;s peculiar skeleton, made out of hyperbolic paraboloid and conoid shell portions, could be rebuilt simply with rectilinear segments, we successfully reconstructed it with our volumetric approach in the three dimensions of time, frequency and amplitude. As a final confirmation, the analyzed sinusoidal model revealed the original architectonic structure with its shapes unaltered.</p>\r\n<p style=\"text-align: justify;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/74236ce5f64f70091a7e3b1a5e2d77ff.jpg\" /></p>\r\n<p style=\"text-align: justify;\">Image 2 &ndash; <em>Reconstructing the Philips Pavilion in OpenMusic</em>. Designing the Philips Pavilion&rsquo;s ruled surfaces as a 3DC-lib, A) we used its <em>x</em>, <em>y</em> and <em>z</em> cartesian coordinates to write a text file read and exported as a 1TRC SDIF by the software SPEAR; then B) we extracted the SDIF content, displaying its data again as a 3DC-lib and finding the original 3D object we started with. Finally, C) we synthesized the SDIF with the PM2 additive synthesizer. 
The sonogram of the resulting sound shows the architectural model from our chosen frontal perspective, but since the Pavilion is a three-dimensional entity, it can be virtually encoded in sound from indefinitely many other points of view; hence, D) having applied a 180-degree rotation about the roll axis, we&rsquo;ve also exported and synthesized it in SPEAR as seen from a rear perspective.</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<p style=\"text-align: justify;\">After having tested the intuitive effectiveness and the reliability of our method, we proceeded to implement the elementary compositional principles (the organization of melodic, intervallic and rhythmic patterns, as well as the assignment of durations and dynamics) through 3D geometric transformations.</p>\r\n<p style=\"text-align: justify;\">For instance, translation along the abscissa places the objects in 3D space according to the rhythmic onsets, upon which the distribution of synchronic (intervallic) and diachronic (melodic) pitches depends: objects with the same onset value are treated as notes in a chord, while different onsets displace the objects as notes in a melody. Horizontal scaling, which expands or compresses the size of the object by the <em>z</em> factors (the onset and offset times), determines the duration of each object; vertical scaling, by the <em>x</em> factors (the fundamental and the last harmonic Hz values), determines its pitch; depth scaling, by the <em>y</em> factors (the minimum and maximum linear amplitude thresholds), determines its overall loudness. We recall incidentally that the link between depth and dynamics has historical roots going back to G. Gabrieli&rsquo;s <em>Sonata pian&rsquo;e forte</em> (1597). 
In fact, dynamics hierarchize acoustic elements just as depth arranges visible elements among foreground, middle ground and background, so that the louder sounds end up placed closer to the perceiver, while the softer ones are placed farther away.</p>\r\n<p style=\"text-align: justify;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5f73f161772cf870a4bb0c247d9fe5d8.jpg\" /></p>\r\n<p style=\"text-align: justify;\">Image 3: <em>Basic musical operations as 3D transformations</em>. Using a single sound object (a sphere) as a building block, we can notice how the several compressions and expansions of its original size visualize the duration scaling (as horizontal deformations on the time plane) and the pitch scaling (as vertical deformations on the frequency plane), whereas the velocity scaling is expressed by the displacements of the shapes in the amplitude plane. All the scaling and translation information is here derived from MIDI values stored in a chord-seq object.</p>\r\n<p style=\"text-align: justify;\">But more importantly, a music generated and varied in a 3D space brings results that are simply irreducible to the elementary transformation techniques that have governed compositional strategies until now. We have seen that <em>the variety of transformation techniques of musical material depends completely on the writing medium within which these transformations are performed</em>. If the manipulations are applied to the space of the staff, the performable &ldquo;outside time&rdquo; actions are restricted to the vertical and horizontal translation, the vertical and horizontal reflection, the 180&deg; rotation and their combinations. Unfortunately, Xenakis never fully investigated the real potential of the <em>hors-temps</em> manipulations offered by his invention, since even the UPIC can be seen as a mere revisitation of the common notation techniques, relying exclusively on the pitch <em>versus</em> time plane. 
According to this paradigm, imprisoned in the traditional flatness of the score, the idea of a sound studied <em>hors-temps</em> remains confined to bidimensional space.</p>\r\n<p style=\"text-align: justify;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c2723f9c58179567f33deccd959d503f.jpg\" /></p>\r\n<p style=\"text-align: justify;\">Image 4: <em>Views from the Hors-Temps &eacute;tude n.1 score. </em>This is an overview of the score's 3D landscapes with the corresponding sonogram, which shows them as 2D frontal perspectives.</p>\r\n<p style=\"text-align: justify;\">Therefore, our aim was to produce a concrete example of a new extratemporal music, where every aspect of the composition, down to the tiniest detail, is observed and shaped by moving around 3D sonic objects in all directions. In these explorations, we discovered a wide array of plastic operations (such as the folding, twisting, bending, smoothing, morphing, carving and rippling of partials), all documented in the electronic piece <em>Hors-Temps &eacute;tude n.1</em> and its accompanying score, written in freely navigable 3D PDF metadata. Here, the visual morphologies influence and are influenced by the acoustic ones, because they presuppose each other in an inextricable interdependence. Accordingly, the piece is based on a process (volumetric modelling sound synthesis) which at once generates the music and its representation, constituting both the aural content of the composition and the evidence that such a composition has taken place, <em>i.e.,</em> its notation. Moreover, instead of limiting ourselves to administering the partials and modelling their structures visually, we have constructed entire visual landscapes made of partials; instead of controlling sound graphically, plastically or architectonically, we are drawing, sculpting and even building architectures with sound.</p>",
        "topics": [],
        "user": {
            "pk": 25,
            "forum_user": {
                "id": 25,
                "user": 25,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/b73b11f2bbfe3c18953a9a232c4a1186?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-12-28T22:42:36.542021+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 443,
                        "forum_user": 25,
                        "date_start": "2020-06-30",
                        "date_end": "2025-09-24",
                        "type": 0,
                        "keys": [
                            {
                                "id": 530,
                                "membership": 443
                            },
                            {
                                "id": 869,
                                "membership": 443
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "francesco_vitale",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "monuments-at-the-limit-of-the-fertile-trihedron-a-note-on-extratemporal-music-and-volumetric-modelling-sound-synthesis",
        "pk": 2216,
        "published": true,
        "publish_date": "2023-04-19T15:13:21+02:00"
    },
    {
        "title": "ᚺᛟᛁᚱ - HEYR - HEAR - HERE - Anders Vinjar",
        "description": "A presentation of the project \"ᚺᛟᛁᚱ - HEYR - HEAR - HERE\" @ IRCAM forum 2023",
        "content": "<p><img alt=\"HEYR - banner\" src=\"https://forum.ircam.fr/media/uploads/user/b0cc5ba2f1d697f9eec9275585e7ba38.png\" /></p>\r\n<ul>\r\n<li>3D soundspaces, advanced recording techniques</li>\r\n<li>reaching another type of audience</li>\r\n<li>sonic stills: presented in special installations, binaural streaming, part of concert programs</li>\r\n<li>intersection between journalism and sound-art</li>\r\n<li>physical presence where news happens and history is shaped</li>\r\n</ul>\r\n<p>We can point our eyes towards what we want to see, and close them if we choose. Ears are naive, full sphere, 24/7.</p>\r\n<p>Kabul, Bamyan, Brandenburger Tor, Ut&oslash;ya, Bataclan, Euromaidan, Bosporus, Ground Zero, Auschwitz, Capitol... - \"Potential Places\" - shaping news, headlines, history - our understanding of the world we live in.</p>\r\n<p>Click-Baits, Fake-News, Truth-By-Volume, biased comments, modern-day Orientalism. <em>HEYR</em> explores 3D field-recordings and immersive listening - combined with the knowledge we already have of events and places - to build space for another kind of reflection about those very potent events, places, people, culture.</p>\r\n<p>Arts meeting journalism - reaching out to another very interested audience: ordinary people, activists, immigrants, students, politicians...</p>",
        "topics": [
            {
                "id": 1183,
                "name": "3D field-recordings",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27858,
            "forum_user": {
                "id": 27830,
                "user": 27858,
                "first_name": "Anders",
                "last_name": "Vinjar",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/bc7dc70136665844d3a27a5621ee335c?s=120&d=retro",
                "biography": "While studying ethnomusicology and linguistics, Anders experimented with potentials of programming-languages and AI-techniques to work on issues of music-analysis. He got interested in using the same tools to create music, stopped studying and started composing.\r\n\r\nMain interests are acousmatic music and other electroacoustic art, algorithmic composition, DSP and programming for music. He spends most of his composition-hours either doing field-recordings or musical programming inside functional programming-environments for music such as OpenMusic, Common Music, CLM, SuperCollider and other FLOSS-ware.\r\n\r\nOutput includes concert-music of various kinds, installations, music for movies, streams/web-art, hacks, applications, workshops, lectures, occasional articles etc.",
                "date_modified": "2025-02-25T12:42:00.893923+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 401,
                        "forum_user": 27830,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 6,
                                "membership": 401
                            },
                            {
                                "id": 367,
                                "membership": 401
                            },
                            {
                                "id": 398,
                                "membership": 401
                            },
                            {
                                "id": 539,
                                "membership": 401
                            },
                            {
                                "id": 545,
                                "membership": 401
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "vinjar",
            "first_name": "Anders",
            "last_name": "Vinjar",
            "bookmarks": []
        },
        "slug": "-heyr-hear-here-1",
        "pk": 2089,
        "published": true,
        "publish_date": "2023-02-27T10:58:34+01:00"
    },
    {
        "title": "Designing Acoustics in Virtual Worlds by Benoît Alary",
        "description": "From the reproduction of existing rooms to imaginary spaces: creative use of sound technologies for immersive worlds.",
        "content": "<div class=\"WordSection1\">\r\n<div class=\"WordSection1\">\r\n<p>With the rise of new media technologies, we are increasingly immersed in virtual worlds. Ranging from navigating a scene in augmented reality to art installations and live music performances, the different ways we reproduce sound immersively are quickly evolving to adapt to new realities.</p>\r\n<p>But how do we reproduce sounds and design the acoustics of these virtual worlds? More importantly, when complex algorithms such as acoustic simulations and artificial intelligence are used to determine what a space sounds like, how can we get creative control back to create the experiences we envision? In his presentation, Benoit Alary (researcher, IRCAM/EAC) will review the methods and trends in immersive sound, such as virtual acoustics and artificial reverberation, and will also demonstrate some practical approaches used for immersive audio.</p>\r\n<p></p>\r\n<h6 style=\"text-align: center;\"><img src=\"/media/uploads/espro_1-0x520.jpg\" alt=\"\" width=\"780\" height=\"520\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></h6>\r\n<h6 style=\"text-align: center;\">L'espace de projection &copy; &Eacute;ric Laforge</h6>\r\n<div></div>\r\n</div>\r\n</div>",
        "topics": [],
        "user": {
            "pk": 24564,
            "forum_user": {
                "id": 24537,
                "user": 24564,
                "first_name": "Benoit",
                "last_name": "Alary",
                "avatar": "https://forum.ircam.fr/media/avatars/BA_2021_06.jpg",
                "avatar_url": "/media/cache/27/b3/27b31b6ef7aaf23499bed29603125e56.jpg",
                "biography": "Benoit Alary is a researcher in the Acoustic and Cognitive Spaces team of the STMS lab, part of IRCAM. He has over fifteen years of experience in immersive audio, shared between industry and academia, including a Ph.D. in acoustics and signal processing from Aalto University (Finland) and an MSc from the University of Edinburgh. His research centers around sound reproduction, analysis/synthesis, and perception. His current projects involve artificial reverberation, 6DoF sound reproduction, machine learning, and virtual acoustics.",
                "date_modified": "2025-11-07T10:18:43.509252+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 317,
                        "forum_user": 24537,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-07",
                        "type": 0,
                        "keys": [
                            {
                                "id": 566,
                                "membership": 317
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "balary",
            "first_name": "Benoit",
            "last_name": "Alary",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3070,
                    "user": 24564,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "designing-acoustics-in-virtual-worlds-by-benoit-alary-1",
        "pk": 3070,
        "published": true,
        "publish_date": "2024-10-24T15:55:23+02:00"
    },
    {
        "title": "Emergence - listening/looking beyond gender in new opera performance - Felicity Wilcox",
        "description": "Dr Wilcox will discuss the aims of the Emergence project: transforming the systems that marginalize women and gender-diverse music creators, encouraging gender-free musical expression, promoting inclusive global operatic practices, and using gender data for socially engaged art.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: justify;\">Presented by: Felicity Wilcox<br /><a href=\"https://forum.ircam.fr/profile/felicity/\">Biography</a></p>\r\n<p style=\"text-align: justify;\">Felicity Wilcox holds a major Australian Research Council grant aimed at centring traditionally marginalized genders in the reimagining of approaches to operatic storytelling and composition. The main outcome of her research project is a new contemporary opera titled EMERGENC/y, currently in development. One of the chief goals of Dr Wilcox's research is to investigate ways of amplifying the voices of diverse composers by proposing creative, practical solutions to the widespread and long-standing marginalization of women and gender-diverse music creators. In this talk, Dr Wilcox will present the overall aims of the Emergence project by examining: how the systems of practice that have traditionally marginalized women and gender-diverse music creators can be transformed; how new methodologies and approaches can include diverse agents in the creation and performance of contemporary opera; how new interfaces for musical expression can engage musical creativity through a gender-free lens; how emerging, global operatic practice engages with inclusive, interactive and interdisciplinary creative approaches; and how data on gender in music can be translated into socially engaged art that contributes to cultural change.</p>\r\n<p style=\"text-align: justify;\">These solutions encompass more inclusive frameworks for opera-making that support distributed creativity, which Dr Wilcox will discuss. They notably include guided improvisation and \"deep listening\" (Oliveros 2005; 2011), as well as other methodologies encompassing feminist listening (Lehmann &amp; Palme 2022) and acoustic ecology (Westerkamp 2002), given the alignment of these fields with non-hierarchical approaches to the creation and reception of musical/sonic works. This type of listening also resonates widely within global Indigenous knowledge systems (see Neale &amp; Kelly 2020; Robinson 2020), and the composer will discuss her recent experiences with Australian First Nations communities regarding the use of invasive biometric monitoring tools for data sonification in the interaction design of the work under discussion.</p>\r\n<p style=\"text-align: justify;\">Dr Wilcox will then detail the application of the new technologies currently being developed for her opera, in which she invites performers to manipulate their vocal pitch and timbre to challenge gender stereotypes attached to the voice. She will present the interaction methodologies she is developing and share the sounds and images generated in recent workshops with performers through real-time manipulation of the voice using wearable gestural controllers.</p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\"><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 64039,
            "forum_user": {
                "id": 63972,
                "user": 64039,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_8779_RET.jpg",
                "avatar_url": "/media/cache/78/dc/78dcf1e8ecb4a188a691352016d23195.jpg",
                "biography": "Dr Felicity Wilcox is an Australian composer and academic specialising in artistic research and research on gender in music. She is a Senior Lecturer in Music and Sound Design at the University of Technology Sydney. As a composer and music director, Felicity has enjoyed a long career in both the concert hall and in interdisciplinary contexts. She has been described as ‘one of Australia’s most versatile and prolific composers’ (Limelight 2023) and ‘an important voice in contemporary classical music’ (Daily Telegraph 2021). She has composed the soundtracks to over 60 screen productions (as Felicity Fox), and was Assistant Music Director and Composer for the Paralympic Games Opening Ceremony in Sydney 2000. Her music is commissioned by Australia’s leading organisations and is performed across Australia, from regional settings to iconic venues such as Sydney Opera House. Internationally, her works have been programmed in the USA, South Korea, France, the UK, Germany & Finland. She is currently the recipient of an Australian Research Council fellowship to pursue a 3-year research project to investigate gender in music through a podcast, journal articles and a new opera, titled EMERGENC/y.",
                "date_modified": "2024-02-15T02:01:25.774634+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "felicity",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "emergence-listeninglooking-beyond-gender-in-new-opera-performance",
        "pk": 2722,
        "published": true,
        "publish_date": "2024-02-13T07:32:54+01:00"
    },
    {
        "title": "Black Hole Museum + Body Browser - as a part of Focus Taiwan C-LAB",
        "description": "Presented during the IRCAM Forum Workshop 2023 In Paris as a part of Focus Taiwan C-LAB.",
        "content": "<p class=\"s3\"><strong><span class=\"s7\">I</span><span class=\"s7\">ntroduction</span></strong></p>\r\n<p class=\"s3\"><span class=\"s4\">Conceived by Taiw</span><span class=\"s4\">anese artist SU Wen-Chi, Dancing Gravity is an experimental performance project on how to imagine and perceive abstract gravity in astronomy with the movement of dance, </span><span class=\"s4\">sound</span><span class=\"s4\"> and light. Originated in </span><span class=\"s4\">Accelerate@CERN</span><span class=\"s4\">Taiwan (Geneva) in 2016 and supported by t</span><span class=\"s8\">he</span><span class=\"s8\"> </span><span class=\"s8\">Curtis R. </span><span class=\"s8\">Priem</span><span class=\"s8\"> Experimental Media and Performing Arts </span><span class=\"s8\">Center</span><span class=\"s8\"> </span><span class=\"s8\">(EMPAC) in the</span><span class=\"s4\"> U.S. 2019, its current working-in-progress Black Hole Museum + Body Brower is evolved with C-LAB Taiwan Sound Lab; inviting dancer, sound and VR artists jointly explore aspects of performing in a VR spacetime, along with reflections on the epidemic prevention and border control, on how we meet offsite. </span><span class=\"s4\">YiLab</span><span class=\"s4\"> founded by SU Wen-Chi comprises new media and performance artists seeking to present new performing formats. 
Active in the global art scene, they have performed in </span><span class=\"s4\">Kunsten festival des arts (Brussels), La B&acirc;tie (Geneva) and Performance Space (Sydney).</span></p>\r\n<p class=\"s6\"><span><img src=\"/media/uploads/bhmbb_05_photo_&copy;_yilab.png\" width=\"1552\" height=\"873\" /></span></p>\r\n<p class=\"s3\"><strong><span class=\"s7\">Co-Production</span></strong></p>\r\n<p class=\"s3\"><span class=\"s4\">YiLab, Taiwan Contemporary Culture Lab (C-LAB)</span></p>\r\n<p class=\"s3\"><strong><span class=\"s7\">Creation Team</span></strong></p>\r\n<p>Concept/ Workshop Planning: Wen-Chi SU</p>\r\n<p>Choreography: Wen-Chi SU, Li-Wei TU</p>\r\n<p>Dancer: Li-Wei TU</p>\r\n<p>Sound Design/ Acoustic Treatment/ WFS Support: Ping-Sheng WU</p>\r\n<p>Scenography Design: Huei-Ming CHANG</p>\r\n<p>Black Hole Museum VR Design: Wen-Yee HSIEH</p>\r\n<p>Body Browser VR Design: Yu-Jei HUANG</p>\r\n<p>VR Program Integration/ Motion Capture: Yu-Jie HUANG</p>\r\n<p>Scientific Partner: Diego Blas</p>\r\n<p>Workshop/ Rehearsal Assistant: Hai-Wen HSU</p>",
        "topics": [],
        "user": {
            "pk": 31229,
            "forum_user": {
                "id": 31182,
                "user": 31229,
                "first_name": "Tom",
                "last_name": "Debrito",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d239346e0c19ec2b960555378b5fe912?s=120&d=retro",
                "biography": "Tom Debrito was the Events Coordination Manager of the IRCAM Forum for the year 2022-2023, as part of a work-study contract.\n\nHe was in charge of the coordination of the Forum Workshops 2022 with the New York University, the Forum Workshops 2023 in Paris and the Forum Workshops 2023 in Taipei in collaboration with the C-LAB. In addition, he handles communication and marketing related tasks to help the development of the IRCAM Forum.",
                "date_modified": "2023-10-30T12:25:43.859854+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 389,
                        "forum_user": 31182,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "debrito",
            "first_name": "Tom",
            "last_name": "Debrito",
            "bookmarks": []
        },
        "slug": "black-hole-museum-body-browser",
        "pk": 2066,
        "published": true,
        "publish_date": "2023-02-15T16:58:16+01:00"
    },
    {
        "title": "\"Archisonic\" by Misucmaker",
        "description": "Archisonic stages an inquiry into how architecture might be heard as music. Centered on concert halls as the clearest meeting point between space and sound, the project gathers shared principles—such as perception, rhythm, and harmony—within a single aesthetic frame. Using Iannix, AI-assisted tools, and live composition environments, each performance becomes a real-time search for the possible music embedded in a building’s form, proportions, and acoustics.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><strong>Archisonic</strong><span> proposes a research-driven artistic exploration at the intersection of architecture, sound, and artificial intelligence. The project investigates how architectural spaces can be approached not only as visual or functional structures, but as acoustic, temporal, and compositional entities. By treating architecture as a resonant body and sound as a spatial memory carrier, ARCHISONIC seeks to develop new forms of listening, composition, and performance.</span></p>\r\n<div>\r\n<p>Rooted in architectural analysis, sound archaeology, and experimental music practices, the project explores how spaces can &ldquo;write&rdquo; music, and how music, in return, can reveal hidden structures, proportions, and affective layers of architecture.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2195a1cafeef7486848c6b3ae663a62f.png\" /></p>\r\n<p><strong>Conceptual Point of Departure: Architecture as a Compositional System</strong><br />ARCHISONIC begins from the premise that architectural space carries latent musical potential. Proportion, material, volume, geometry, and circulation are treated not as static features, but as compositional forces: parameters that can be read, mapped, and performed.</p>\r\n<p>Instead of writing music for a space, the project proposes composing from the space and its lived history. A building is approached as a time-bearing body, shaped by its making, its materials&rsquo; aging, its changing functions, and the layers of use it has accumulated. 
Architectural form is translated into sonic structure, rhythm, texture, and temporal development, allowing the building&rsquo;s biography, its past and the life it has hosted, to become an active agent in the musical process.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a6096afa04a93c3ef440585f250238b6.png\" /></p>\r\n<p><strong>AI as a Translational Instrument</strong><br />In ARCHISONIC, artificial intelligence is not positioned as an autonomous creator, but as a mediating and interpretive tool between architecture and sound. Machine learning models are used to analyze, transform, and re-read architectural data, acoustic recordings, and spatial characteristics.</p>\r\n<p>The project deliberately employs AI as an imperfect translator. Its slippages, artifacts, and misreadings, especially when confronted with complex, non-linear, historically charged spaces, are approached through a hauntological lens, where translation becomes a site of spectral overlap between what a place is, what it remembers, and what it can still sound like.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f737072612225a28bbb5d9f4f34bad04.jpg\" /></p>\r\n<p><strong>Methodological Approach</strong><br />The research and performance are structured around three interconnected layers:</p>\r\n<ul>\r\n<li>Architectural Analysis<br />Spatial geometry, materiality, proportions, and circulation are studied through drawings, measurements, and historical references.</li>\r\n<li>Sound Capture and Transformation<br />Field recordings, impulse responses, resonances, and ambient sounds of architectural environments are collected and transformed into compositional material.</li>\r\n<li>AI Support<br />Rather than functioning as an autonomous author, AI operates as a conceptual support and translational partner: an instrument that proposes mappings, reveals hidden patterns, and offers productive misreadings that the performer curates, reshapes, or rejects in 
real time.</li>\r\n</ul>\r\n<p><strong>Performance as a Spatial Dialogue</strong></p>\r\n<p>The live performance functions as a dialogue between the performer and the architectural space. The performer navigates between concrete spatial references, form, proportion, material presence, resonance, and the unfolding sonic decisions made in the moment.</p>\r\n<p>Each performance is inherently site-sensitive and non-reproducible. Musical form emerges through interaction rather than a fixed score, allowing the space to continuously reshape timing, density, texture, and intensity.</p>\r\n<p>Rather than illustrating architecture, the performance attempts to listen to it, revealing its temporal, acoustic, and affective dimensions.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/eeb440cabad6961e10489c34cc267bb6.png\" /></p>\r\n<p><strong>A Field to Explore</strong><br />ARCHISONIC is driven by a set of open research questions:</p>\r\n<ul>\r\n<li>\r\n<p>How can architecture be perceived and composed as sound?</p>\r\n</li>\r\n<li>\r\n<p>What kinds of musical structures emerge from spatial proportions and materials?</p>\r\n</li>\r\n<li>\r\n<p>How can AI be used as conceptual support for composition?</p>\r\n</li>\r\n<li>\r\n<p>Can performance become a method of architectural analysis and research?</p>\r\n</li>\r\n</ul>\r\n<p>These questions are not approached as problems to be solved, but as territories to be explored through artistic practice.</p>\r\n<p><strong>What It Seeks to Reveal</strong><br />ARCHISONIC aims to contribute to ongoing discussions on interdisciplinary research between music, architecture, and emerging technologies, while reframing this field from a musicological perspective. 
Since much of the existing work at this intersection is often architecture-led, the project positions its methods and outcomes as a contribution to the musicological corpus as well as to spatial studies.</p>\r\n<p>By combining artistic performance with analytical reflection, ARCHISONIC proposes modes of knowledge production that operate beyond disciplinary boundaries. Ultimately, it treats listening as a critical and creative act, capable of revealing unseen dimensions of space, memory, and human presence within architecture.</p>\r\n</div>",
        "topics": [
            {
                "id": 3972,
                "name": "Archisonic",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4125,
                "name": "HearingArchitecture",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3973,
                "name": "Music&Architecture",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 153970,
            "forum_user": {
                "id": 153746,
                "user": 153970,
                "first_name": "Alperen",
                "last_name": "Yalçın",
                "avatar": "https://forum.ircam.fr/media/avatars/DSC08213.jpg",
                "avatar_url": "/media/cache/52/49/52494dacfe6ec7766a84a754fb3dba1a.jpg",
                "biography": "Misucmaker is an independent artist and researcher working at the intersection of music and architecture. Under the name Misucmaker, he releases original tracks and performs live. Through his project Archisonic, he explores how space, form, and acoustics shape listening. He is a Musical Director and Producer for Sofar Ankara, supporting curation, production, and live performance documentation. He is also a member of Blakhol, an electronic music collective, and works across sound, visuals, and performance.",
                "date_modified": "2026-02-23T17:58:43.081148+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1277,
                        "forum_user": 153746,
                        "date_start": "2026-01-04",
                        "date_end": "2027-01-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "misucmaker",
            "first_name": "Alperen",
            "last_name": "Yalçın",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4136,
                    "user": 153970,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "archisonic",
        "pk": 4136,
        "published": true,
        "publish_date": "2026-01-04T13:28:43+01:00"
    },
    {
        "title": "Heat Death of the Universe",
        "description": "As we grow older, we strive for order, stability, and equilibrium in our lives, sometimes at the expense of change, progress, and motion. Here is my experimental short film \"Heat Death of the Universe\", first installment of my Stasis Trilogy.",
        "content": "<p><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/55htpDoB_hI\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p>When people are at the early stages of their lives, as they are not yet lulled into the comfortable numbness of order, routine, and predictability, they are more free in their decisions. They are not afraid of making mistakes. As they get a job, move out from their parents&rsquo; place, get into serious relationships, build a home, buy some furniture, replace some of those with nicer ones, decorate their home exactly as they like it, balance their bank account, figure out all their income, all the bills, all the mortgage payments, and divvy up the remaining amount into various, age-appropriate social and cultural activities, a holiday, once a split-second decision, becomes a one-weeks-notice affair, then a part of their yearly holiday plans, then a flat out impossibility. They wouldn&rsquo;t want to take the risk of disrupting the delicate order and balance they have built into their lives, with much, much effort. They might stop doing a lot of the things they used to do, so as not to endanger the house of cards they have oh so carefully built. This video depicts a couple, so afraid of losing the order and structure they have built into their lives, they are now unable to move a finger.</p>\r\n<p><a name=\"__DdeLink__230_559306323\"></a>I have likened their situation to the concept known as <em>the heat death of the universe</em>. Every interesting or exciting event in the universe happens not due to energy, but a difference in energy. When a mass of very warm air meets a mass of cold air, that creates turbulence. During this initial period they swirl into each other and create a plethora of interesting patterns. However, as time passes, temperatures of the two masses equalize. They settle down. After the thermal equilibrium is reached, nothing interesting happens ever again. 
Heat death of the universe is the same thing, but on a universal scale. When every star depletes all of its fuel, when every piece of matter reaches thermal equilibrium, when there is no energy difference, no useful energy, the universe will turn into a lukewarm, dark, lifeless place, and stay like that until the end of time.</p>\r\n<p><img src=\"/media/uploads/user/1fbbd83a75c91e7ada26062f44629e21.gif\" alt=\"facial motion capture\" width=\"600\" height=\"338\" /></p>\r\n<p>For the music, I have used pre-recorded sounds, because for so long they have been the elementary particles of western classical music, and because in their recorded form, they are unable to change ever again. I have used sounds from the archive of Istanbul Composers Collective, and some snippets from other sources. At first I wanted to use CataRT, but it wasn&rsquo;t able to handle the large number of samples I needed. So I have manually categorized around 1500-2000 samples into 4 different Sampler instruments in Ableton. The multi-layered structure of sounds and the seeming barrage of new ideas, as opposed to the development of an existing idea, hide the fact that almost the entirety of the music may be reduced to more or less two chords. Much like the fact that we fill our lives with glamorous, but ultimately unnecessary things, to create the illusion that something <em>is</em> happening, that the universe is not dead, yet.</p>\r\n<p>For the 3D models, I have used an open-source program called Meshroom. I took 997 pictures of the models (my wife Deniz Kureta and my friend Mithatcan Ocal) and of my living room to create the 3D setting of the video. Then I imported all the models into a 3D animation/design program called Blender, which is also open source. Inside Blender, I have treated the virtual room as a film set and shot the video using 3 cameras. I have also used Blender for facial motion capture. 
This was the first time I had used either of these two programs, so all of this was possible thanks to the various open-source communities throughout the web.</p>\r\n<p>This experimental short film, &ldquo;Heat Death of the Universe&rdquo;, is the first installment of my Stasis Trilogy.</p>",
        "topics": [
            {
                "id": 194,
                "name": "3d",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 195,
                "name": "Concrete",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 196,
                "name": "Fixed-media",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 127,
                "name": "Video",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 61,
            "forum_user": {
                "id": 61,
                "user": 61,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/36752538526b328cb5c451a19a257b0d?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-15T21:13:03.953441+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "kureta",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "heat-death-of-the-universe",
        "pk": 251,
        "published": true,
        "publish_date": "2019-10-17T01:05:45+02:00"
    },
    {
        "title": "Latent Audio Effects (LAFX) by Kyungsu Kim",
        "description": "This project explores the possibilities of sound design and music production by manipulating compressed latent spaces within neural audio codecs.",
        "content": "<h2><strong>Latent Audio Effects (LAFX) </strong></h2>\r\n<h2><strong>- Kyungsu Kim</strong></h2>\r\n<p><strong>Neural audio codecs</strong> are at the forefront of audio AI technology, leveraging neural networks to compress audio data to unprecedented levels while maintaining high perceptual audio quality. This capability allows for the efficient storage and transmission of high-quality audio through the encoding of signals into a compressed latent space. Audio codecs have played a crucial role in the advancement of music and industry, with recent innovations in neural codecs pushing the boundaries even further by providing enhanced compression techniques without compromising sound integrity.</p>\r\n<p>While the primary goal of audio codecs is to preserve audio content within a compressed format, artists and musicians have explored the unique artifacts from lossy audio codecs like MP3 as a medium for artistic expression, utilizing tools such as <a href=\"https://forum.pdpatchrepo.info/topic/8735/mp3-glitcher\">MP3 glitching</a> and <a href=\"https://www.mechlabindustries.com/mechlab-productions/databending/\">databending</a>. Building on this spirit, our project seeks to investigate the <strong>possibilities of using neural audio codecs for artistic applications beyond their original intent.</strong></p>\r\n<p>We have discovered that by <strong>manipulating the compressed representation within neural audio codecs, we can introduce unique sonic features</strong> that are difficult to achieve through traditional signal processing in sample space. This allows us to explore sonic characteristics that are distinct from the effects typically obtained in sample space. Particularly, operations such as mixing multiple audio tracks in the latent space, rather than the sample space, offer distinct sonic characteristics. Furthermore, injecting noise into the latent space has been found to produce unique audio textures. 
Our ongoing research is exploring additional effects such as delay, saturation, and matrix multiplication within this latent space. Preliminary results, along with audio samples, are available on this <a href=\"https://kyungsukim.notion.site/Latent-Audio-Effects-LAFX-4e7259b83c754d069b5a6322fda0b8fb?pvs=4\">page</a>.</p>\r\n<p>This project aims to explore the creative applications of neural audio codecs, focusing on how manipulating audio within the latent space can lead to distinct auditory effects. By experimenting with various types of operations in the compressed domain, we seek to uncover new possibilities for sound design and music production, offering musicians and sound designers new tools to expand their creative palette and push the limits of what can be achieved in audio processing.<br /><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9dc3e262fe6cd735e5ed5dbceaac575e.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/bf9c8200833d831d57fb7d6b029eb2a8.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>",
        "topics": [
            {
                "id": 1774,
                "name": "neural synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 85425,
            "forum_user": {
                "id": 85324,
                "user": 85425,
                "first_name": "Kyungsu",
                "last_name": "Kim",
                "avatar": "https://forum.ircam.fr/media/avatars/Profile.jpg",
                "avatar_url": "/media/cache/df/7c/df7cdd51d0d84207aea12e65620ff241.jpg",
                "biography": "I am currently pursuing my Ph.D. at Seoul National University in the Music and Audio Research Group. My research focuses on developing innovative methods of music creation through the application of artificial intelligence technologies. I am particularly interested in designing new concepts of musical instruments and creating tools that enhance the creativity of musicians in the music production process. My work aims to bridge the gap between traditional music composition and cutting-edge AI techniques, enabling new forms of creative expression and pushing the boundaries of what is possible in music production. Through this research, I hope to contribute to the evolution of music by offering musicians novel ways to expand their creative potential.",
                "date_modified": "2024-10-21T10:03:25.276599+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 971,
                        "forum_user": 85324,
                        "date_start": "2024-10-21",
                        "date_end": "2025-10-21",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "lotrueve",
            "first_name": "Kyungsu",
            "last_name": "Kim",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3025,
                    "user": 85425,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "latent-audio-effects-lafx",
        "pk": 3025,
        "published": true,
        "publish_date": "2024-10-11T10:14:05+02:00"
    },
    {
        "title": "Analog ring modulation: a DIY approach inspired by Rodrigo F. Cadiz - historical techniques",
        "description": "We present a device that implements a ring modulator intended to recreate historical analog audio. Drawing inspiration from historical realizations of ring modulation, we experimented with the most common methods described in the literature. Our device, which is comparatively inexpensive, improves upon the aforementioned alternatives from a DIY perspective.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p>Presented by : Rodrigo F. Cadiz&nbsp;</p>\r\n<p><strong><a href=\"https://forum.ircam.fr/profile/rcadiz/\" target=\"_blank\">Biography</a></strong></p>\r\n<p>We present a device that implements a ring modulator intended to recreate historical analog audio. As simple as it sounds, the multiplication of two analog audio signals is not a straightforward task, mainly due to phenomena such as saturation or non-linearity. Drawing inspiration from historical realizations of ring modulation, we experimented with the most common methods described in the literature. Our device, which is comparatively inexpensive, improves upon the aforementioned alternatives from a DIY perspective. It is predominantly analog, adhering to the practices of the first decades of electroacoustic music, with the notable exception of the digital control for the sinusoidal carrier. We aim to strike a balance between analog audio technology and modern components and techniques. Additionally, this paper includes audiovisual demonstrations that showcase the practical uses of our ring modulator. 
A<span>&nbsp;</span><a href=\"https://github.com/rodrigocadiz/analog_ring_mod\">comprehensive GitHub repository</a><span>&nbsp;</span>provides open access to all the resources required for DIY enthusiasts to replicate our implementation.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/ab457c54c8397c4f6e188bc3860b2ea2.png\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/925c8bd6feca48551f90f6ab5d6d7661.png\" /></p>\r\n<p>This project was funded by the Office for Arts and Culture of the Vice Presidency for Research, Pontificia Universidad Cat&oacute;lica de Chile and by ANID Anillo ATE220041.</p>\r\n<p><a href=\"https://github.com/rodrigocadiz/analog_ring_mod\" title=\"Github Repository\">Link to the Github Repository</a></p>",
        "topics": [
            {
                "id": 2533,
                "name": "analog audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2351,
                "name": "DIY",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2532,
                "name": "ring modulation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 11397,
            "forum_user": {
                "id": 11394,
                "user": 11397,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Cadiz400.png",
                "avatar_url": "/media/cache/66/17/66178bdd36ff7a6b11f4439b5e9a41e1.jpg",
                "biography": "Rodrigo F. Cádiz (1972) is a Chilean composer, researcher and engineer. He studied composition and electrical engineering at the Pontificia Universidad Católica de Chile (UC) in Santiago and he obtained his Ph.D. in Music Technology from Northwestern University. His compositions, consisting of approximately 70 works, have been presented at several venues and festivals in Latin America, North America and Europe. His catalog includes works for solo instruments, chamber music, symphonic and robot orchestras, visual music, computers, and new interfaces for musical expression, in particular brain-computer interfaces and the Arcontinuo, a new electronic musical instrument he has been working on with two more colleagues for the past 15 years. He has authored around 70 scientific publications in peer-reviewed journals and international conferences. His areas of expertise include sonification, sound synthesis, digital audio processing, computer and electroacoustic music, composition, new interfaces for musical expression and the musical applications of complex systems. He is currently a full professor at the Music Institute and Department of Electrical Engineering at UC.",
                "date_modified": "2025-09-20T02:22:52.763354+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1041,
                        "forum_user": 11394,
                        "date_start": "2017-10-16",
                        "date_end": "2026-01-10",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "rcadiz",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "analog-ring-modulation-a-diy-approach-inspired-by-historical-techniques",
        "pk": 3207,
        "published": true,
        "publish_date": "2025-02-19T19:38:42+01:00"
    },
    {
        "title": "CCRMA summer seminar: introduction to bach",
        "description": "Introduction to bach: summer course at CCRMA. Online seminar, from August 30th to September 3rd. Early enrollment price ($50 discount) ends on August 20th.",
        "content": "<p>Hello everyone,</p>\r\n<p>Like last year, Andrea Agostini, Julien Vincenot, Davor Vincze, and I will hold a beginner-level summer seminar on the bach library for Max at CCRMA (Stanford University). It is an online seminar, from August 30th to September 3rd. The course will be held from 9am to 1pm Pacific Time, which is 6pm to 10pm Central European Time.</p>\r\n<p>More information and the course syllabus can be found here:<br />https://ccrma.stanford.edu/workshops/bach-in-maxmsp</p>\r\n<p>To enroll:<br />https://www.eventbrite.com/o/ccrma-summer-workshops-33124778619</p>\r\n<p>Early enrollment is still open until August 20th and gives a $50 discount (bach patrons get an additional $50 discount).</p>\r\n<p>The syllabus is given as a reference, so that people who are already quite familiar with most of the topics may refrain from enrolling. Note that it is NOT a very advanced seminar, so those of you who already do crazy things with bach &amp; co. probably will not need it. However, it is meant to give a good, all-round overview of several subjects, so it may also be useful for filling in some blanks.</p>\r\n<p>Best,<br />Daniele Ghisi</p>",
        "topics": [
            {
                "id": 347,
                "name": "Cnmat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 375,
            "forum_user": {
                "id": 375,
                "user": 375,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/35e1cc14164e2b11037f9652f4f11972?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "danieleghisi",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ccrma-summer-seminar-introduction-to-bach",
        "pk": 978,
        "published": true,
        "publish_date": "2021-08-17T00:17:14+02:00"
    },
    {
        "title": "CHASING WATERFALLS - Un opéra d'intelligence artificielle - Sven Soren Beyer",
        "description": "In 2022, the AI opera \"Chasing Waterfalls - An Artificial Intelligence Opera\" premiered at the Semperoper Dresden and at the Hong Kong New Vision Arts Festival. The Berlin artist collective phase7 performing.arts, T-Systems MMS, and the Berlin collective kling klang klong presented an AI-generated opera voice trained specifically for this opera. phase7's creative director Sven Sören Beyer and arts engineer Frieder Weiss will discuss working with AI in opera, its challenges, and its beauties.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Presented by: Sven S&ouml;ren Beyer<br /><a href=\"https://forum.ircam.fr/profile/sven777/\">Biography</a></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1c3d7612be6bd723bf74267a341e54b1.jpg\" width=\"814\" height=\"1162\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>In \"Chasing Waterfalls - An Artificial Intelligence Opera\", parts of the composition, the libretto, and the stage design are created by artificial intelligence. The unpredictability of this intervention shapes the dramaturgy and disrupts classical narrative structures. A creative yet artificial partner influences the course of the performance. By sharing our information, images, opinions, and preferences on the Internet, each of us creates an alter ego, a digital twin. Yet the creation and control of this alter ego are no longer entirely in our hands. Machines and programs generate our digital reflection, guiding and steering us through targeted advertisements and suggestions on the web. They limit our choices, push us toward decisions, and actively influence our lives. Our self-determination diminishes in the digital age. The centerpiece of our multimedia opera is a programmed artificial intelligence (AI), a deep-learning process that acts simultaneously as performer, composer, and creator of the show. In this digital world, \"chasing waterfalls\" tells the story of the development of a main character, \"Ego fluens\", the fluid self, who takes on different forms. Played by 7 performers, this \"Ego fluens\" - driven by curiosity and a desire for fulfillment - does not remain confined to its own self-perception, but gives rise to forms that exceed all predetermined limits.&nbsp;Age, gender, ethnicity, skin color - everything changes constantly. As the performance progresses, it becomes increasingly difficult to tell whether \"Ego\" acts autonomously or not. Its independence becomes a suggestion. The main character thus experiences its own malleability and dilution in a fluid digital world. Another central element of the opera is audience participation: before the performance, spectators can have their faces 3D-scanned. From the image data, a projection is generated on stage, making the machine visually present and allowing each spectator to discover a part of themselves during the performance.</p>\r\n<p>\"Chasing Waterfalls\" is a coproduction of the artist collective phase7 performing.arts Berlin with the Semperoper Dresden and the Hong Kong New Vision Arts Festival, with T-Systems MMS as project partner and with the support of the S&auml;chsische Semperoper Foundation.</p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a></strong></p>",
        "topics": [
            {
                "id": 753,
                "name": "Artificial intelligence,",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 748,
                "name": "co-creativity",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1833,
                "name": "opera",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1832,
                "name": "performing arts",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1834,
                "name": "singing AI",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 63268,
            "forum_user": {
                "id": 63201,
                "user": 63268,
                "first_name": "Sven Soren",
                "last_name": "Beyer",
                "avatar": "https://forum.ircam.fr/media/avatars/phase7_Logo_no_claim_white_background_euroscale_Coated_v2.jpg",
                "avatar_url": "/media/cache/ad/07/ad0730603b972b22508d8428d3de5e13.jpg",
                "biography": "Sven Sören Beyer is founder and artistic director of the Berlin-based artist collective phase7 performing.arts. The interplay between humans and machines is a catalyst for the artistic discourse of phase7. This leads to the creation of performative productions and installations with international reach and a digital affinity, which may seem utopian at the time of their development but prove to be sustainably progressive in the international art context. The range of projects by phase7 spans from operas like Morton Feldman's \"Neither\" to AI operas like \"Chasing Waterfalls\" as well as large-scale events like the opening of the European Capital of Culture Bodø 2024 or the celebrations for the 25th and 30th anniversaries of the fall of the Berlin Wall at the Brandenburg Gate. The current thematic focus of phase7 is the examination of the autonomization of artificial intelligence and its virtual and real effects on modern society and the individual self: what happens when machine learning algorithms interact with humans as creative processes and partners?",
                "date_modified": "2024-10-23T14:20:23.391147+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sven777",
            "first_name": "Sven Soren",
            "last_name": "Beyer",
            "bookmarks": []
        },
        "slug": "chasing-waterfalls-an-artificial-intelligence-opera",
        "pk": 2763,
        "published": true,
        "publish_date": "2024-02-21T17:47:17+01:00"
    },
    {
        "title": "Shoonya",
        "description": "The name shoonya refers to zero, which is the start and the end just like a sand journey and us people who are on constant evolution but are still the same.",
        "content": "<p>Have you ever looked really closely at a grain of sand that is surrounded by millions of billions that look just like it? On closer inspection, every grain has a different story, shaped and influenced by its origins, time, and environment. From a distance, however, they form one big, apparently homogeneous mass.&nbsp;</p>\n<p><br>Starting from the concept of &lsquo;unity in diversity&rsquo;, we set out on a journey of transcending the boundaries of cultural difference and the politicising of differences over the commonalities that exist - just like people.&nbsp;<br>Informed by Shravni and Tanya&rsquo;s Indian heritage, they lean on the classical tradition of ragas, which are perceived differently across the breadth of the country yet have a constant delivery system. Shoonya is a personal retrospective reiterating \"cross-culture\" existence, open to interpretation through the journey of sand, where the elemental commonality between people and sand acts as a metaphor.&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 39705,
            "forum_user": {
                "id": 39651,
                "user": 39705,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/6B55B6F1-E7EA-4D11-8E85-E11CE11F02C0.JPEG",
                "avatar_url": "/media/cache/e9/ed/e9ed9fb13c604570462c5b9135d3f670.jpg",
                "biography": "Informed by Shravni and Tanya’s Indian heritage, delving into discoveries in shared childhood memories they lean on to the classic of ‘Kabir ke dohe (tr: Kabir’s verses)’, reiterating cross-culture existence, for interpretation. With diverse languages and dialects, the message remains the same. In order to celebrate the love and reciprocal love and virtues that binds all humankind. We wish to use this opportunity to take two simultaneous approaches to identify the emotion of Doha as we weave them together to send across ‘unity in diversity’ as a message.",
                "date_modified": "2024-11-06T20:21:16.248295+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 953,
                        "forum_user": 39651,
                        "date_start": "2024-10-07",
                        "date_end": "2025-10-07",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "shoonyaa",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3019,
                    "user": 39705,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "shoonya-1",
        "pk": 2152,
        "published": true,
        "publish_date": "2023-03-20T20:44:16.313162+01:00"
    },
    {
        "title": "Atelier DAFNE+ : Minting content on the platform - Hugues Vinet, Greg Beller, Guillaume Piccarreta, Salah Eddine Chaouch, Miller Puckette, Serge Lemouton",
        "description": "\"DAFNE+ offers digital content creators new ways to create, distribute, and monetize their works of art through blockchain technology.\" This workshop, given as part of the IRCAM Forum Workshops @Paris 2024, gives you access to the brand-new platform.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Presenters: Hugues Vinet, Greg Beller, Guillaume Piccarreta, Salah Eddine Chaouch, Miller Puckette, Serge Lemouton</p>\r\n<h1>DAFNE+: A platform for archiving and promoting experimental music and sound production</h1>\r\n<p>Wednesday, March 20 - IRCAM, Shannon room, from 2:30pm to 4:30pm.</p>\r\n<p>The DAFNE+ platform is designed to meet the evolving needs of digital content creators, providing them with innovative tools for creating, distributing, and monetizing their artistic works through blockchain technology. \"One of the main goals of the project is to make content distribution fair.\"</p>\r\n<p>In an intuitive and simple way, without requiring any technical knowledge of blockchains/NFTs, creative communities are invited to join the decentralized autonomous organization (DAO), which offers new services and tools enabling the creation and co-creation of content on a blockchain. DAFNE+ research also focuses on defining new business models for content distribution, allowing creators and users to monetize multimedia creations.</p>\r\n<p>IRCAM's role in DAFNE+ is notably to organize a community of artists and technology providers around electronic music and sound. Halfway between the IRCAM Forum and Sidney, the archive of the interactive musical repertoire, and based on an autonomous organization and a distributed infrastructure, the platform will allow artists, researchers, and engineers to share and monetize elements of technology for the production of music and performance works - libraries, patches, documentation...&nbsp;</p>\r\n<h2>Workshop agenda:</h2>\r\n<ul>\r\n<li><span>Introduction to the DAFNE+ project // Hugues Vinet (5 min)</span>\r\n<ul>\r\n<li><span>What content, and why upload it? // Greg Beller (5 min)</span></li>\r\n<li><span>Archiving electronic music - Sidney // Serge Lemouton (10 min)</span></li>\r\n<li><span>Examples of minted pieces // Miller Puckette (10 min)</span></li>\r\n</ul>\r\n</li>\r\n<li><span>Hands-on session - Minting content // Greg Beller, Guillaume Piccarreta and Salah Eddine Chaouch (30 min)</span>\r\n<ul>\r\n<li>From the recording to the platform...</li>\r\n<li><span>...to the NFT marketplace</span></li>\r\n</ul>\r\n</li>\r\n<li><span>Feedback and discussion // all (50 min)&nbsp;</span></li>\r\n<li><span>Recap and next steps... (10 min)</span></li>\r\n</ul>\r\n<h2>Links:</h2>\r\n<ul>\r\n<li><span>Website:<span>&nbsp;</span></span><a href=\"https://dafneplus.eu\"><span>https://dafneplus.eu</span></a></li>\r\n<li><span>Platform:<span>&nbsp;</span></span><a href=\"https://dafneplus.eng.it/\"><span>https://dafneplus.eng.it</span></a></li>\r\n<li><span>Discord:<span>&nbsp;</span></span><a href=\"https://discord.gg/aR6VvV9Ttw\"><span>https://discord.gg/aR6VvV9Ttw</span></a></li>\r\n<li><span>Survey:<span>&nbsp;</span><a href=\"https://forms.gle/czcJyXhmthFkN5V48\">https://forms.gle/czcJyXhmthFkN5V48</a></span></li>\r\n<li><span>YT tutorials playlist:&nbsp;<a href=\"https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ\">https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ</a></span></li>\r\n<li><span>Newsletter:<span>&nbsp;</span></span><a href=\"https://dafneplus.eu/contact\"><span>https://dafneplus.eu/contact</span></a></li>\r\n<li>Contact:<span>&nbsp;</span><a href=\"mailto:info@dafneplus.eu\">info@dafneplus.eu</a></li>\r\n<li>Presentation:<span>&nbsp;</span><a href=\"https://forum.ircam.fr/article/detail/dafne-launch-of-the-platform-for-the-preservation-and-promotion-of-experimental-music-and-sound-production\">https://forum.ircam.fr/article/detail/dafne-launch-of-the-platform-for-the-preservation-and-promotion-of-experimental-music-and-sound-production</a></li>\r\n</ul>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a></strong></p>",
        "topics": [
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1255,
                "name": "EU project",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1856,
                "name": "platform",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1096,
                "name": "workshop",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "dafne-workshop-minting-content-on-the-platform",
        "pk": 2814,
        "published": true,
        "publish_date": "2024-03-07T12:59:21+01:00"
    },
    {
        "title": "Trembling Chamber: Designing a Spatial Electroacoustic Instrument with the Language of Butterflies by Zhao Jiajing",
        "description": "This presentation explores the concept and techniques of sound design in \"Trembling Chamber\", an 8-channel sound installation that examines the symbolism of vibration in the process of metamorphosis. It highlights the design of a spatial electroacoustic instrument utilising transducers and transparent films to transform inaudible vibrations into an immersive sonic experience that is also simultaneously visible and tactile.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p style=\"text-align: left;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c9f608b9cdd2fefbcacba525b3e12429.jpg\" width=\"762\" height=\"517\" /></p>\r\n<p style=\"text-align: left;\">Picture&nbsp;cr. Wenxin Zhang</p>\r\n<p>Presented by : Zhao Jiajing</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/zhao-jiajing/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p><em>Trembling Chamber</em><span>&nbsp;</span>is a 8-channel sound installation co-created by Jiajing Zhao and Wenxin Zhang, originally exhibited at \"The Larva of Time\" exhibition at ICA, NYU Shanghai, in the summer of 2024. The work uses vibration as both a symbol and medium to create a fictional field of emotional and physical communication between humans and insects.</p>\r\n<p>The sound design explores various modes of vibration&mdash;&ldquo;the language of larvae and butterflies&rdquo;&mdash;throughout the metamorphic process. The sonic installation, which I consider a spatial electroacoustic instrument, features surface transducers mounted on printed transparent films, enveloping the listener as if within a living bio-organism. This configuration allows for the sonification and visualisation of vibrations, including those in the inaudible infrasound frequency range. Using this animated, dynamic instrument, I composed an 8-channel piece that translates the vibrational language of butterflies into an immersive experience. 
The spatialisation of sound employs a hybrid approach realised through Max/MSP and Spat.</p>\r\n<p>For the presentation, a recording of the immersive composition will be played through a surround sound system, accompanied by a pair of transducer-driven sound panels (representing a quarter of the original installation). Participants will have the opportunity to play the spatial instrument via a custom-designed interface developed with Max/MSP and Mira. Alongside this, I will discuss how the artistic concepts are realised through the sound design and delve into the technical aspects of the installation.</p>",
        "topics": [
            {
                "id": 1805,
                "name": "electroacoustic music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1707,
                "name": "installation sonore",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2346,
                "name": "instrument design",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1792,
                "name": "Interdisciplinary",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17907,
            "forum_user": {
                "id": 17901,
                "user": 17907,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/375A5895_1-squashed.jpg",
                "avatar_url": "/media/cache/cc/6a/cc6a91e696c92b89cb2a32c0eba47ddd.jpg",
                "biography": "Zhao Jiajing is a composer, sound designer, and interdisciplinary artist based in London. \n\nZhao Jiajing’s artistic practice encompasses sound, installation, and new media, exploring themes such as temporality, technology, digital cultures, and nature. Since 2019, he has been deeply engaged in spatial sound, creating multichannel compositions and installations.\n\nZhao Jiajing’s works have been presented internationally at events and places such as the New York City Electroacoustic Music Festival (US), Soundcinema Düsseldorf (DE), Espacios Sonoros (AR), Sound/Image Festival (UK), IRCAM (FR), Barbican Centre (UK), Lisboa Incomum (PT), and Sound Art Museum (CN), among many others. As a multi-skilled composer and sound designer, he has collaborated with pioneering theatre groups, performers, and visual artists, creating projects that captivate audiences worldwide. Zhao is also the founder and director of Soundworlds Studio, a London-based immersive sound design studio.\n\nZhao holds an MA in Information Experience Design from the Royal College of Art and is currently pursuing a PhD in Electroacoustic Music at the University of the Arts London (CRiSAP) under Adam Stanović.",
                "date_modified": "2025-12-31T17:06:33.182751+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "zhao-jiajing",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "trembling-chamber-designing-a-spatial-electroacoustic-instrument-with-the-language-of-butterflies-by-zhao-jiajing",
        "pk": 3188,
        "published": true,
        "publish_date": "2024-12-28T12:16:27+01:00"
    },
    {
        "title": "Présentation de groupe, EIE, Conservatoire de Xinghai (Guangzhou, Chine)",
        "description": "Présentations et démonstrations par des étudiants en ingénierie des instruments électroniques du Conservatoire de musique de Xinghai.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p>Pr&eacute;sent&eacute; par : Marco Bidin and his students&nbsp;<span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Zheng Yizhong,&nbsp;</span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Bin Yuan (Tyler)</span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">,&nbsp;</span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Xiao Xiongwen</span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">,&nbsp;</span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Wang Zichong<br /></span><a href=\"https://forum.ircam.fr/profile/mbalea/\">Biographie Marco Bidin</a></p>\r\n<p></p>\r\n<p><strong>Zheng Yizhong</strong></p>\r\n<p><em>Kontakt flute sound samples</em></p>\r\n<p>Cette&nbsp;source d'&eacute;chantillonnage sonore provient d'un instrument de musique traditionnel chinois, qui fait partie de la cat&eacute;gorie des instruments &agrave; vent. Ses techniques de jeu sont tr&egrave;s &eacute;tendues et on les trouve rarement sur le march&eacute; des sources sonores &eacute;chantillonn&eacute;es.</p>\r\n<p>Au cours du processus de production, quatre types de techniques de jeu ont &eacute;t&eacute; &eacute;chantillonn&eacute;s, &agrave; savoir l'articulation, la percussion, le son vibratoire et le vibrato, ce dernier &eacute;tant divis&eacute; en deux techniques.</p>\r\n<p>Le plug-in Waves a &eacute;t&eacute; utilis&eacute; pour le post-traitement pendant la post-production, et le r&eacute;sultat final a &eacute;t&eacute; int&eacute;gr&eacute; dans le logiciel Kontakt pour &ecirc;tre utilis&eacute; comme source sonore Kontakt soft. 
Pour une exp&eacute;rience plus concr&egrave;te, vous pouvez l'essayer sur place !</p>\r\n<p><br /><strong>Xiao Xiongwen</strong></p>\r\n<p><em>Beatbox humain</em></p>\r\n<p>Ce&nbsp;que je&nbsp;suis en train de d&eacute;montrer est ce que l'on appelle le Beatbox humain, qui est n&eacute; aux &Eacute;tats-Unis et qui est un nouvel &eacute;l&eacute;ment du hip-hop apparu dans les ann&eacute;es 1980, une culture musicale qui a prosp&eacute;r&eacute; au d&eacute;but des ann&eacute;es 2000. Je vais incorporer le Bbox dans notre chanson pour que tout le monde puisse d&eacute;couvrir le charme unique du Beatbox.</p>\r\n<p>J'ai utilis&eacute; un microphone cardio&iuml;de &agrave; bobine mobile, dont les performances sont l&eacute;g&egrave;rement meilleures dans les m&eacute;diums et qui offre un son plus &eacute;quilibr&eacute; sur l'ensemble des fr&eacute;quences, ce qui le rend id&eacute;al pour une utilisation dans divers contextes. Il me permet de mettre en valeur le charme du Beatbox.</p>\r\n<p><br /><strong>Bin Yuan (Tyler)</strong></p>\r\n<p><em>FMB Sequencer</em></p>\r\n<p>Synth Bass &amp; FM Percussion With Sequencer 16 steps and Effects taken by the idea FBM Sequence Tool。</p>\r\n<ol>\r\n<li>S&eacute;quenceur FM qui utilise la multiplication du pitch, puis connect&eacute; &agrave; la forme d'onde triangulaire de la forme d'onde FM, tout en &eacute;tant connect&eacute; au module de capture, &agrave; travers les diff&eacute;rents seuils de pitch r&eacute;gl&eacute;s pour capturer, de mani&egrave;re &agrave; r&eacute;aliser le r&eacute;glage en temps r&eacute;el de la bo&icirc;te &agrave; rythmes FM.<br /><br /></li>\r\n<li>La section Bassline est d'abord configur&eacute;e avec un module de contr&ocirc;le de panneau, qui facilite le contr&ocirc;le visuel direct de toutes les fonctions connect&eacute;es. Les r&eacute;glages de pitch et de pitch grab sont similaires &agrave; ceux du s&eacute;quenceur FM, mais les r&eacute;glages de forme d'onde sont diff&eacute;rents. 
J'ai configur&eacute; trois formes d'onde : dent de scie, impulsion et triangle, qui peuvent &ecirc;tre utilis&eacute;es s&eacute;par&eacute;ment ou ensemble pour cr&eacute;er des effets sonores synth&eacute;tiques.<br /><br /></li>\r\n<li>Les effets sonores cr&eacute;&eacute;s par le s&eacute;quenceur FM et Bassline sont finalement int&eacute;gr&eacute;s dans la table de mixage, puis connect&eacute;s aux trois modules d'effets pour ajouter de la richesse au son. Les trois effets sonores sont le d&eacute;lai, le phaser et la r&eacute;verb&eacute;ration.<br /><br /></li>\r\n<li>Dans la conception du panneau, afin d'augmenter l'effet visuel, j'ai ajout&eacute; un module de spectre &agrave; deux positions, en fonction des ajustements de d&eacute;clenchement du son ou de l'effet et des changements de la forme d'onde ou de l'&eacute;clairage, de sorte que le son et le rythme des changements dans le panneau soient directement visibles, et en m&ecirc;me temps, augmentent les sens directs de l'utilisateur.</li>\r\n</ol>\r\n<p><br /><strong>Wang Zichong</strong></p>\r\n<p><em>RADARR Syst&egrave;me de jeu semi-automatique</em><br /><em>Synth&eacute;tiseurs de la s&eacute;rie Metal-E</em></p>\r\n<p>Je suis un passionn&eacute; de musique &eacute;lectronique qui s'int&eacute;resse &agrave; divers aspects de ce domaine. Ceux-ci incluent la conception de plugins audio ind&eacute;pendants, l'&eacute;chantillonnage de sons, la production d'instruments virtuels, le s&eacute;quen&ccedil;age de synth&eacute;tiseurs logiciels MIDI et la conception d'interactions sonores. Parmi mes travaux les plus remarquables, on peut citer les synth&eacute;tiseurs et s&eacute;quenceurs de la s&eacute;rie Metal-E ainsi que le syst&egrave;me de jeu semi-automatique RADARR.</p>\r\n<p>Le RADARR Semi-automatic playing system est un plugin cr&eacute;&eacute; &agrave; l'aide de Max/MSP &agrave; des fins d'&eacute;dition. 
Il se compose de bo&icirc;tes &agrave; rythmes, de synth&eacute;tiseurs et de sections de pistes. Ce plugin permet une commutation et une modulation rapides des pistes audio, ce qui garantit des performances en direct fluides et stables. En outre, il permet l'arrangement et l'ex&eacute;cution de diverses formes de musique &eacute;lectronique.</p>\r\n<p>Les synth&eacute;tiseurs de la s&eacute;rie Metal-E sont cr&eacute;&eacute;s &agrave; l'aide de la technologie de synth&egrave;se audio FM qui &eacute;mule les sons de trois instruments diff&eacute;rents. Ils s'int&egrave;grent &eacute;galement aux logiciels DAW pour la composition et la modulation audio.</p>\r\n<p><strong>Presentation coordinator: Marco Bidin</strong></p>\r\n<p>Marco Bidin est compositeur, organiste et directeur artistique.</p>\r\n<p>Apr&egrave;s avoir obtenu son dipl&ocirc;me d'orgue, il a &eacute;tudi&eacute; la musique ancienne &agrave; Trossingen et la musique contemporaine &agrave; Stuttgart, o&ugrave; il a pass&eacute; l'examen de concert en composition et le certificat d'&eacute;tudes sup&eacute;rieures en informatique musicale sous la direction du professeur Marco Stroppa.</p>\r\n<p>Il a donn&eacute; des conf&eacute;rences, des cours et s'est produit en tant que soliste en Europe et en Asie, et ses compositions ont &eacute;t&eacute; jou&eacute;es en Allemagne, en France, au Portugal, en Italie, au Canada, en Cor&eacute;e du Sud, au Japon et en Chine.</p>\r\n<p>Marco Bidin est professeur associ&eacute; en ing&eacute;nierie des instruments &eacute;lectroniques au conservatoire de musique de Xinghai &agrave; Guangzhou, en Chine.</p>\r\n<p>Le d&eacute;partement d'ing&eacute;nierie des instruments &eacute;lectroniques a &eacute;t&eacute; cr&eacute;&eacute; en 2016, et c'est le plus jeune d&eacute;partement du d&eacute;partement d'ing&eacute;nierie des instruments de musique du Conservatoire de musique de Xinghai, et le d&eacute;partement a commenc&eacute; ses activit&eacute;s la m&ecirc;me 
ann&eacute;e. Dans le contexte de la plupart des entreprises de fabrication d'instruments de musique &eacute;lectroniques du pays situ&eacute;es dans le Guangdong, la naissance de la conception et de la recherche et d&eacute;veloppement d'instruments de musique &eacute;lectroniques est in&eacute;vitable.</p>\r\n<p>Apr&egrave;s des efforts inlassables ces derni&egrave;res ann&eacute;es, l'enseignement et la recherche scientifique du d&eacute;partement d'ing&eacute;nierie des instruments &eacute;lectroniques sont progressivement entr&eacute;s dans une phase de d&eacute;veloppement normalis&eacute;, syst&eacute;matique et scientifique.</p>\r\n<p>&Agrave; l'heure actuelle, les principaux cours dispens&eacute;s par le d&eacute;partement professionnel d'enseignement et de recherche sont les suivants : conception et production de synth&eacute;tiseurs/effets virtuels, conception et production de sources sonores virtuelles, conception et production de contr&ocirc;leurs/s&eacute;quenceurs MIDI, conception et production d'instruments de musique &eacute;lectroniques, conception de programmes et d'installations artistiques et APP interactifs, conception de prototypes d'instruments conceptuels.</p>\r\n<p>L'&eacute;cole compte quatre enseignants, principalement des jeunes et des personnes d'&acirc;ge moyen, tous titulaires d'une ma&icirc;trise ou d'un dipl&ocirc;me sup&eacute;rieur, et constitue une &eacute;quipe d'enseignants dont l'&acirc;ge et la structure acad&eacute;mique sont raisonnables et dont les orientations professionnelles sont diverses.</p>\r\n<p>Sur la base de l'apprentissage de la composition de musique &eacute;lectronique traditionnelle, cultiver des talents compos&eacute;s innovants dans la conception et le d&eacute;veloppement d'instruments de musique &eacute;lectroniques. 
Selon les objectifs de formation du cours, construire un mode d'apprentissage orient&eacute; coh&eacute;rent avec les int&eacute;r&ecirc;ts d'apprentissage professionnel des &eacute;tudiants et le contenu du sujet, un contenu d'apprentissage professionnel flexible et un apprentissage professionnel approfondi selon l'orientation de l'inscription nationale &agrave; grande &eacute;chelle, exploiter certaines forces des &eacute;tudiants et combiner des modules d'apprentissage pratiques, et mieux cultiver des talents de premier plan avec une capacit&eacute; d'innovation et une forte capacit&eacute; pratique ou une capacit&eacute; de recherche th&eacute;orique pour la soci&eacute;t&eacute;.</p>\r\n<p>L'orientation sp&eacute;cifique de la formation des talents est la suivante : entrer dans une entreprise/un studio de musique audio pour devenir un concepteur de plug-in de son virtuel ; entrer dans une entreprise Internet pour devenir un concepteur d'APP musical ; entrer dans une usine d'instruments de musique &eacute;lectroniques pour devenir un ing&eacute;nieur en d&eacute;veloppement sonore et en d&eacute;bogage ; &eacute;tablir des studios individuels pour d&eacute;velopper des sources sonores, des synth&eacute;tiseurs, des effets et des installations interactives pour des particuliers ou des groupes professionnels ; devenir un analyste de l'&eacute;chantillonnage de la mesure du son dans des institutions telles que l'Institut de recherche sur le son.</p>\r\n<p>Le Conservatoire de musique Xinghai est un &eacute;tablissement d'enseignement musical sup&eacute;rieur situ&eacute; dans la ville de Guangzhou, dans la province de Guangdong, en Chine. 
Il porte le nom du c&eacute;l&egrave;bre compositeur Xian Xinghai (chinois : 冼星海) et a &eacute;t&eacute; fond&eacute; en 1932 par le compositeur Ma Sicong sous le nom de Conservatoire de musique de Guangzhou.&nbsp;</p>\r\n<p><a href=\"https://www.xhcom.edu.cn/\">https://www.xhcom.edu.cn/</a></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 308,
                "name": "Live electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1864,
                "name": " sampling",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 59,
                "name": "Synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20786,
            "forum_user": {
                "id": 20775,
                "user": 20786,
                "first_name": "Marco",
                "last_name": "Bidin",
                "avatar": "https://forum.ircam.fr/media/avatars/cv_pic.jpg",
                "avatar_url": "/media/cache/c8/12/c812194ab029dcbb2712b19a78eabf13.jpg",
                "biography": "Marco Bidin is a composer, artistic director, organist and harpsichord player from Italy.\n\nAfter his Organ degree in Italy, he studied Early Music performance in Trossingen and Contemporary Music performance in Stuttgart. Subsequently, under the guidance of Marco Stroppa, he completed the terminal degree (Konzertexamen) in Composition and the Certificate of Advanced Studies in Computer Music.\n\nMarco Bidin is active as an international composer and performer. He was invited in institutions like IRCAM (Paris, France), Shanghai Conservatory (China), Silpakorn University (Bangkok, Thailand) and Seoul National University (South Korea) among others.\n\nHe worked as a lecturer for Composition at the HMDK Stuttgart and as an organist for the Protestant Church in Stuttgart. 2010-2023 he was the artistic director of the italian-based NGO association ALEA. He is currently Associate Professor at the Electronic Instrument Engineering Department of the Xinghai Conservatory of Music in Guangzhou, China.",
                "date_modified": "2026-03-04T11:59:23.041276+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 988,
                        "forum_user": 20775,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    },
                    {
                        "id": 634,
                        "forum_user": 20775,
                        "date_start": "2023-11-16",
                        "date_end": "2024-11-16",
                        "type": 0,
                        "keys": [
                            {
                                "id": 155,
                                "membership": 634
                            },
                            {
                                "id": 406,
                                "membership": 634
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "mbalea",
            "first_name": "Marco",
            "last_name": "Bidin",
            "bookmarks": []
        },
        "slug": "group-presentation-eie-xinghai-conservatory-guangzhou-china",
        "pk": 2793,
        "published": true,
        "publish_date": "2024-03-04T06:25:08+01:00"
    },
    {
        "title": "DAFNE+ Workshop: Imagine a fair creative economy.",
        "description": "Part of the IRCAM forum workshop 2023",
        "content": "<p>In this participatory workshop, attendees will discover and co-create new business models for their creative ecosystems. Technologies such as Blockchain and NFTs have enabled new revenue models for artists. However, there is room to improve and imagine better ways to distribute profits so that every contributor gets a fair reward. DAFNE+ European project is developing technologies to support a fair ecosystem for content distribution. We aim to with communities such as IRCAM Forum to develop innovative and fair business models to support all actors, from artists and performers to developers and collectors. Participants will identify the most important problems for their communities, discover innovative economic models based on NFTs, and propose alternative solutions to create fairer creative economies.</p>",
        "topics": [
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1255,
                "name": "EU project",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 35,
                "name": "Meta-forum",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 40629,
            "forum_user": {
                "id": 40575,
                "user": 40629,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6a16f361ae5ef22ef9a40171d0d3475e?s=120&d=retro",
                "biography": "- Ámbar Tenorio-Fornés (they/she/he) is a free software developer and researcher. They are the founder and director of Decentralized Academy Ltd, and they lead the development of the blockchain-based software Decentralized Science (funded by LEDGER European Project) and Quartz Open Access (funded by Grant for The Web program). Their PhD studied decentralized governance tools for Commons-Based Peer Production communities. Their previous research and development experience includes participation in the European Projects P2P Models and P2Pvalue. They have been visiting researcher at the University of Surrey, the University of Westminster and Kozminski University. Their experience developing decentralized web tools includes Teem, SwellRT, Decentralized.science, and Quartz Open Access using technologies such as Blockchain and IPFS.",
                "date_modified": "2023-03-28T11:09:29+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ambar",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "dafne-workshop-imagine-a-fair-creative-economy",
        "pk": 2166,
        "published": true,
        "publish_date": "2023-03-29T15:52:38+02:00"
    },
    {
        "title": "«FERAL FREQUENCIES by Wilding AI» represented by Alexandre Saunier (FR/CH) and Maurice Jones (DE/CA)",
        "description": "FERAL FREQUENCIES is an AI-driven spatial sound composition developed and presented by the Wilding AI collective.",
        "content": "<p>Large Language Models (LLMs) are central to text-to-sound systems like text-to-speech and text-to-music, reshaping musical practices through prompt-based interaction while raising concerns about authenticity, automation, and the ethical use of artists&rsquo; work. Wilding AI is a public research-creation project bringing together artists, researchers, engineers, and students to explore speculative AI futures. In contrast to techno-solutionist uses of AI, Wilding AI treats LLMs as compositional tools rather than sound generators, integrating them into Max, Ableton Live, and spatial audio environments to control parameters such as 3D sound motion. This intervention presents the collective sound installation FERAL FREQUENCIES, which puts the system into action.</p>\n<p>Following a year-long research-creation process culminating in a two week residency at Laboratoire formes &middot; ondes at Universit&eacute; de Montr&eacute;al, FERAL FREQUENCIES demonstrates the aesthetic, technical, and practical implemention of the collective&rsquo;s developed capabilities in AI-driven sound spatialization. 
The composition traverses four key themes the collective explored: Emotional Sovereignty; Data That Matters; The Algorithmic Shape of Stories; and Breaking Machines / Making Kin.</p>\n<p>The Wilding AI Collective consists of Beth Coleman, Maurice Jones, Alexandre Saunier, Portrait XO, Daniela Huerta, Sahar Homami, Debashis Sinha, Pia Baltazar, Nao Tokui, Gadi Sassoon, Heu Hsu, and Federico Visi.</p>\n<p>The residency and presentation of FERAL FREQUENCIES are supported by the &laquo; Laboratoire formes &middot; ondes &raquo; at Universit&eacute; de Montr&eacute;al.</p>\n<p>The development of FERAL FREQUENCIES at the Society for Arts and Technology is funded by the Minist&egrave;re de l'&Eacute;conomie, de l'Innovation et de l'&Eacute;nergie, in partnership with MA Sc&egrave;ne Nationale.</p>\n<p>The Wilding AI project is made possible by round 14 of the Goethe-Institut International Coproduction Fund, and supported by Concordia University, MONOM Studios, 4DSOUND, and Neutone Inc.</p>",
        "topics": [],
        "user": {
            "pk": 41877,
            "forum_user": {
                "id": 41821,
                "user": 41877,
                "first_name": "Alexandre",
                "last_name": "Saunier",
                "avatar": "https://forum.ircam.fr/media/avatars/00_ALEX_flipped_zoomed.jpg",
                "avatar_url": "/media/cache/95/58/9558a814b3dd83bbf0a73b95dcae29b2.jpg",
                "biography": "Alexandre Saunier is an artist, professor in the Audiovisual department at LUCA School of Arts, KU Leuven, and senior researcher in the Immersive Art Space at Zurich University of the Arts (ZHdK). With a deep interest in the theory and history of media arts, cybernetics, and complex systems theory, his work merges artistic practice with academic research, focusing on the interactions between light, sound, autonomous systems, and sensory perception.\nAlexandre holds a PhD from Concordia University (2023), where he studied the contemporary and historical practices of light as an artistic medium driven by real-time computational systems. His previous studies include mathematics and physics (CPGE, 2009), sound design and engineering (ENS Louis Lumière, 2012), and he was a fellow at ENSADLab, where he conducted research on behavioral robotics and interactive lighting (ENS Arts Décoratifs, 2015).\nAlexandre's artistic and research work is regularly presented at major international venues, including Mutek Montreal, Elektra BIAN, Festival Internacional de la Imagen, Ars Electronica, ISEA, Impakt Festival, MuffatHalle, Bcn_llum, ALIFE Conference, Media Art History, and Nuit Blanche Toronto.",
                "date_modified": "2025-11-10T11:40:29.040947+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "aaaallls",
            "first_name": "Alexandre",
            "last_name": "Saunier",
            "bookmarks": []
        },
        "slug": "feral-frequencies-by-wilding-ai-represented-by-alexandre-saunier-frch-and-maurice-jones-deca-1",
        "pk": 3871,
        "published": true,
        "publish_date": "2025-10-16T02:06:20.416518+02:00"
    },
    {
        "title": "Sounds of Living, Living Sounds: A Participatory Study of Domestic Sounds in Urban Japan",
        "description": "Presented during the IRCAM Forum @NYU 2022\r\n\r\nAt the intersection of the natural, mechanical, and digitally synthesised, what is the present experience of sound at home? How can present home soundscapes inform their future design?",
        "content": "<p>Sounds at home capture a complex assembly of everyday experiences. Composed of various physical characteristics processed by the human ear, they signify objects, gestures, and habits, subjective and collective. In recent years, the dispersion of intelligent objects and an accelerated digitalization of the home sphere are transforming home soundscapes, and novel interactive possibilities such as sonification and voice interfaces draw attention to the immense capacities of sound as a communicative medium. At the intersection of the natural, mechanical, and digitally synthesised, what is the present experience of sound at home? How can present home soundscapes inform their future design? In the presented exploratory research, humans living in urban Japan aged 25-82 filled a participatory &lsquo;sound diary&rsquo;, recording sounds of their domestic environments and documenting their experience of these sounds. A thematic analysis of the collected data illuminated domestic sounds&rsquo; situated qualities - their psychosocial significance, rhythmical nature, and sensory and interactive multimodalities. The initial results of this work in progress highlight the significance of a cross-disciplinary approach to sound design at home, consistent with the manifold holistic nature of the phenomenon. Such an approach may expose a path for collaboratively designing a future set of sounding artifacts resonating with present everyday experience.</p>",
        "topics": [],
        "user": {
            "pk": 31293,
            "forum_user": {
                "id": 31246,
                "user": 31293,
                "first_name": "Marine",
                "last_name": "Zorea",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/68bca8a3dc8ed1d804bfb4491f4739cc?s=120&d=retro",
                "biography": "Marine Zorea is a designer, researcher and artist based in Japan. ‏She currently examines the design of alternative sound interactions with intelligent objects at home as part of her PhD research at Kyoto Institute of Technology and Kyoto Design Lab. ‏She has collaborated with Japanese manufacturers and design consultancies and has shown her work and research in Japan and abroad. She holds a BA in psychology (Tel Aviv University) and MSc in product design (Kyoto Institute of Technology).",
                "date_modified": "2024-08-24T17:17:52.861202+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "marine-zo",
            "first_name": "Marine",
            "last_name": "Zorea",
            "bookmarks": []
        },
        "slug": "sounds-of-living-living-sounds-a-participatory-study-of-domestic-sounds-in-urban-japan",
        "pk": 1294,
        "published": true,
        "publish_date": "2022-09-05T14:41:41+02:00"
    },
    {
        "title": "Immersing the User by Jake Parry",
        "description": "\"Immersing the User\" explores immersion within the context of gambling media. In contemporary media culture, the growing emphasis on immersive experience and the convergence of artistic and commercial practice call for critical scrutiny. This talk examines the processes and techniques through which immersive states are produced, focusing on the functionalisation of sound as a material for modulating attention, sustaining engagement, and eliciting compulsive behaviours. By foregrounding ideological and cultural dimensions of immersion, the presentation reflects on how creative practice might subvert this paradigm, and how sound spatialisation technologies can be mobilised to do more than just immerse.",
        "content": "<p><span></span></p>\r\n<p><strong>➡️ This presentation is part of </strong><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\"><strong>IRCAM Forum Workshops Paris / Enghien-les-Bains March 2026</strong></a><span></span></p>\r\n<p><span>In contemporary media culture, immersive experiences are often presented as appealing qualitative states through which the user may escape the realities of everyday life via absorption in technological mediation. Within this framing, mediation becomes a prescriptive process in which audio-visual stimuli are assigned a functional role of masking, working to block out the external world whilst prioritising the private enclosure of the self (Hagood, 2019). As such, the aesthetics of immersion rest on a principle of subjective, internal perception that tends towards an affective solipsism, privileging individual sensation while restricting reflexive awareness of mediation itself (Schrimshaw, 2017).</span></p>\r\n<p><span>In the context of listening, these conditions work to sustain engagement with imminent sensory stimuli while simultaneously closing off the possibility for critical attention to the processes of production that structure the experience. From a behavioural science perspective, immersion is described as an intense focus on a specific, immediate activity in which attention to competing stimuli is diminished or suppressed (Murch et al., 2020). Such states are characterised by continuity and flow, limiting the listener&rsquo;s capacity to ask questions such as: how is this experience produced, and for what purposes? 
Immersive media frequently conceals the organisation of technological processes and material infrastructures, absorbing the listener within a bounded zone of attention and affect (Sch&uuml;ll, 2012).</span></p>\r\n<p><span>As immersive sound is increasingly adopted by commercial media platforms and marketed as a functional means of escape, interrogating the ideological and conceptual foundations of immersion becomes critical if artistic practice and technological innovation are to avoid uncritical corroboration. Against a backdrop of rapid technological development in spatial audio, this talk addresses convergences between art and product, examining how immersive strategies circulate between experimental practice and commercial design. Using gambling media as a case study, it highlights problematic features of the immersive paradigm, illustrating how sound in this context is carefully engineered to regulate attention, sustain engagement and elicit compulsive behaviours, while simultaneously obscuring the extractive mechanics that drive commercial profit.</span></p>\r\n<p><span>Beyond this analysis, the talk turns toward practices and technologies that actively subvert these problematised dynamics of immersion. By foregrounding mediation, disruption, and material process, it considers how spatial audio might be used not simply to intensify immersion but to expose, resist, or reconfigure the conditions under which immersive experiences are produced. In doing so, the presentation gestures toward a critical practice that moves beyond immersion as an unquestioned aesthetic goal. A practice in which the listener&rsquo;s active criticality, interpretation and relation are emphasised over the user&rsquo;s passive envelopment, where the outside world is confronted and acted upon rather than </span><em><span>&ldquo;sinking into silence&rdquo;</span></em><span> (Bull, 2010, p. 
56).</span></p>\r\n<p><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c4f57a498a5d82187d3afd40423883e7.jpg\" width=\"867\" height=\"867\" /></span></p>",
        "topics": [
            {
                "id": 4093,
                "name": "escapism",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4091,
                "name": "Gambling",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 303,
                "name": "Immersion",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4092,
                "name": "Media",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 158627,
            "forum_user": {
                "id": 158397,
                "user": 158627,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/vppphoto.jpg",
                "avatar_url": "/media/cache/3d/c5/3dc551beabc9a2f204250ebb409e4f17.jpg",
                "biography": "Jake Parry is a UK-based composer and researcher exploring critical approaches to sound spatialisation and listening. He is a PhD candidate at the MTI Research Centre (De Montfort University), focusing on immersion and its cultural, technological and material conditions.",
                "date_modified": "2026-02-28T20:20:22.565511+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jparry",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "immersing-the-user-by-jake-parry",
        "pk": 4270,
        "published": true,
        "publish_date": "2026-01-27T19:23:01+01:00"
    },
    {
        "title": "Somax version 2.7.0 is out!",
        "description": "This version introduces Max 9 compatibility, a redesigned user interface, multi-label corpus building, and new label handling for filtering and real-time control.",
        "content": "<h3><strong>Max 9 Compatibility</strong></h3>\r\n<p>Somax2 now runs on <strong>Max 9.0.3 or later</strong> (earlier versions of Max 9 contained a bug in <code>groove~</code> and <code>buffer~</code> that caused issues with corpus loading).</p>\r\n<h3><strong>Updated User Interface</strong></h3>\r\n<p>Somax2's interface has been redesigned and updated to the <strong>Max 9 color scheme</strong>, providing a refreshed and modern look. The interface looks now consistent in both Max 9 and Max 8. This applies to all the <code>somax.&lt;object&gt;.app</code> and the <code>somax.&lt;object&gt;.ui</code> objects.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8aa6b38dc7b546092119f63739af454e.png\" /></p>\r\n<h3><strong>New Features &amp; Enhancements</strong></h3>\r\n<ul>\r\n<li><strong>MFCC Support</strong> &ndash; <strong>Mel-Frequency Cepstral Coefficients (MFCC)</strong> are now included as atoms, analyzed and annotated during corpus building, and available as real-time influences in the Somax2 environment.</li>\r\n<li><strong>Multi-Label Corpus</strong> &ndash; Corpora can now include <strong>multiple segmentation labels</strong>, manually annotated in <strong>Reaper</strong> and <strong>Audacity</strong> and built in Somax2 from the text files exported from these DAWs. <br /><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4d092d5675c4dce253bb2a2dcddd5050.png\" /><br /><br />These labels allow:\r\n<ul>\r\n<li><strong>Label-Based Filtering</strong> &ndash; The new <code>somax.filter</code> object enables filtering a player's output based on specific incoming labels.<br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d023a5d910b475725a5ee61942f3461d.png\" /></li>\r\n<li><strong>Custom Atoms</strong> &ndash; The new <code>somax.atom</code> object treats custom labels as atoms, alongside pitch, chroma, and MFCC, allowing dynamic sequence matching. 
The <code>somax.atom.app</code> module extends this functionality with <strong>wireless</strong> visibility across the entire Somax2 environment.<br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d5d3687e467e95f6c02f7f8e0ca2ed33.png\" /></li>\r\n</ul>\r\n</li>\r\n<li><strong>Corpus Updater</strong> &ndash; The new <strong>Version Update</strong> in <code>somax.audiocorpusbuilder</code> lets users update old corpora (&le; v2.6.1) to the latest format.</li>\r\n<li><strong>Savestate Functionality</strong> &ndash; Players now support <strong>dynamic saving and loading of presets</strong> as <code>.json</code> files for user-defined configurations.</li>\r\n</ul>\r\n<h3><strong>Documentation</strong></h3>\r\n<p>The entire documentation package has been updated, including:</p>\r\n<ul>\r\n<li>New tutorials on <strong>saving/loading presets</strong> and <strong>building annotated corpora with custom labels</strong>.</li>\r\n<li>Updated <strong>Max help files</strong> and reference pages for new objects.</li>\r\n<li>A revised <strong>Somax2 User's Guide (PDF)</strong>.</li>\r\n<li>Template patchers (1&ndash;4 players) now support <strong>preset saving/loading</strong> for user-defined parameters.</li>\r\n</ul>\r\n<p>Go to the <a href=\"https://forum.ircam.fr/projects/detail/somax-2/\">Somax2 Forum page</a> for installation</p>\r\n<p>See more at the <a href=\"http://repmus.ircam.fr/somax2\">Somax2 Project Page</a> and the <a href=\"https://reach.ircam.fr\">REACH website</a></p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1989,
                "name": "artificial intelligence",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2788,
                "name": "Improvisation, generativity and co-creative interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1287,
                "name": "REACH",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2184,
                "name": "RepMus",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 747,
                "name": "somax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32267,
            "forum_user": {
                "id": 32219,
                "user": 32267,
                "first_name": "Marco",
                "last_name": "Fiorini",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-01-16_at_10.39.51.jpeg",
                "avatar_url": "/media/cache/e7/ed/e7ed5b0d44a066e65e188a351b8c9bb8.jpg",
                "biography": "Marco Fiorini is an Italian musician and researcher specializing in human-machine interaction in musical improvisation. \nHe is part of the Music Representation team at IRCAM in Paris, working on the ERC REACH project with a focus on Somax2. \nAs a PhD candidate at Sorbonne Université he develops co-creative instruments that foster real-time interaction between musicians and artificial agents.\nHe has collaborated with artists such as Jöelle Léandre, George Lewis, Steve Lehman, and Horse Lords. His work as guitarist, electronic musican and computer music designer have been featured at major international venues and festivals including Carnegie Hall (New York), ManiFeste (Centre Georges Pompidou, Paris), Improtech Paris-Tokyo (Tokyo University of the Arts), Klang (Royal Danish Academy of Music, Copenhagen), Mixtur (ESMUC, Barcelona).\nIn 2024, he is an invited lecturer at the Max Summer School at Tokyo Geidai University of the Arts, and in 2025 he will lead a Somax2 workshop at Berklee College of Music for the 50th anniversary of the International Computer Music Conference in Boston.\nHe holds degrees in Jazz Guitar, Electronic Music, Sound and Music Computing and Software Engineering.",
                "date_modified": "2026-02-25T18:50:33.457396+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 407,
                        "forum_user": 32219,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "fiorini",
            "first_name": "Marco",
            "last_name": "Fiorini",
            "bookmarks": []
        },
        "slug": "somax-version-270-is-out",
        "pk": 3370,
        "published": true,
        "publish_date": "2025-03-25T13:46:11+01:00"
    },
    {
        "title": "Elegy - Jinyu Fang, Yunsheng Zhu, Tairan Shi, Yutong Chai",
        "description": "Équipe de projet :  Jinyu Fang, Yunsheng Zhu, Tairan Shi, Yutong Chai\r\n\r\nÀ notre époque, nous observons un phénomène où les gens nuisent continuellement à l'environnement naturel, tout en exprimant leur admiration pour les paysages simulés artificiellement. Cette situation apparemment contradictoire révèle un problème profond, qui met en évidence l'écart considérable entre nos actions destructrices et nos idéaux pour la nature. En explorant la technologie, la science et la créativité, nous oublions souvent les graves problèmes auxquels notre planète est confrontée.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par :&nbsp;Jinyu Fang, Yunsheng Zhu, Yutong Chai, Tairan Shi<br /><a href=\"https://forum.ircam.fr/profile/cygnuschai/\">Biography&nbsp;Yutong Chai<br /></a><a href=\"https://forum.ircam.fr/profile/jinyufang/\">Biography Jinyu Fang<br /></a><a href=\"https://forum.ircam.fr/profile/yunshengzhu/\">Biography Yunsheng Zhu</a></p>\r\n<p></p>\r\n<p>Nous visons &agrave; cr&eacute;er un espace narratif immersif, en utilisant la RV comme moyen de discuter de ce ph&eacute;nom&egrave;ne. Notre objectif est d'&eacute;voquer des souvenirs de paysages naturels qui ont &eacute;t&eacute; endommag&eacute;s en transmettant la beaut&eacute; inh&eacute;rente aux sons des d&eacute;chets, inspirant ainsi les gens &agrave; r&eacute;fl&eacute;chir &agrave; l'impact de nos modes de vie sur la Terre. Ces sons nous rappellent que nous devons &ecirc;tre responsables de nos actes. Au fil de la narration, les spectateurs entreront dans l'ann&eacute;e 3030 &agrave; la premi&egrave;re personne, o&ugrave; les sons naturels ont disparu, et participeront &agrave; la restauration de ces sons. Dans le monde virtuel que nous avons cr&eacute;&eacute;, chaque son devient une note de deuil, un po&egrave;me sur la beaut&eacute; de la nature et la douleur de sa destruction. 
Nous esp&eacute;rons &eacute;veiller la conscience des gens et les motiver &agrave; r&eacute;examiner leur lien avec le monde naturel, en s'effor&ccedil;ant de mieux prot&eacute;ger notre maison commune.</p>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/88c8c3ae6e02f3ba5a91139ea18a3df3.jpg\" /></p>\r\n<p>Techniquement, nous avons utilis&eacute; Blender et Cinema 4D pour la mod&eacute;lisation 3D, Unreal engine 5 pour la construction des sc&egrave;nes et la conception des interactions, et Reaper, Adobe Audition et Fl studio pour la conception sonore.</p>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/917bf0698aca66ebfb5e5d8097d3030c.png\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/507a89faa9ad8c693b668d60b532dc5b.png\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/966d2292f9b3c1031e6f6edef7d46947.png\" /></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 1890,
                "name": "Immersive narrative",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1891,
                "name": "Rubbish Production Sound",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 301,
                "name": "Virtual reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 55029,
            "forum_user": {
                "id": 54967,
                "user": 55029,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/69079227b5a6ac148b67cd54acaee837?s=120&d=retro",
                "biography": "I am JinYu Fang, a designer who is passionate about cross-creative design fields. My design philosophy has always been to focus on human emotions and psychological issues, and to convey the power of emotions and thoughts expressed through design. Currently, I am pursuing a degree in Digital Direction at the Royal College of Art, a field that offers me more opportunities to combine creativity with technology.\nIn my previous work experience, I have been actively involved in various design projects as a visual designer. I was fortunate enough to work with a number of well-known brands, providing innovative solutions to their design needs. These collaborative experiences have provided me with the opportunity to challenge myself, continually improve my design skills and apply my creativity to different project areas.\nI believe design is a powerful communication tool that can change perceptions, trigger emotions and stimulate thinking. I will continue to explore the possibilities of cross-media design and pursue the perfect blend of creativity and technology to create more impactful design works.",
                "date_modified": "2024-08-30T17:55:51.920416+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jinyufang",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "elegy",
        "pk": 2826,
        "published": true,
        "publish_date": "2024-03-12T21:42:56+01:00"
    },
    {
        "title": "This bitter sweet thing - Thomas Bugg",
        "description": "Cette chose douce et amère est une exploration artistique de l'ascension qui plonge dans l'interaction complexe de la lumière, du son et de la technologie, nous invitant à contempler notre place dans le tissu complexe de l'existence.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par : Thomas Bugg<br /><a href=\"https://forum.ircam.fr/profile/tjabugg/\">Biographie</a></p>\r\n<p></p>\r\n<p>This bitter sweet thing\" est une exploration artistique de l'ascension qui plonge dans l'interaction complexe de la lumi&egrave;re, du son et de la technologie, nous invitant &agrave; contempler notre place dans le tissu complexe de l'existence.</p>\r\n<p>Au c&oelig;ur de \"This bitter sweet thing\" se trouve une exp&eacute;rience sensorielle qui se d&eacute;ploie par couches, en commen&ccedil;ant par un paysage sonore qui enveloppe le public pendant les six premi&egrave;res minutes dans une obscurit&eacute; totale. Ce choix d&eacute;lib&eacute;r&eacute; plonge les spectateurs dans une m&eacute;ditation sur l'existence conflictuelle de l'humanit&eacute; et de leur propre personne dans ce monde.</p>\r\n<p>Cette chose douce et am&egrave;re nous invite &agrave; nous confronter aux complexit&eacute;s de notre relation avec la technologie, &agrave; nous d&eacute;battre avec les cons&eacute;quences de nos choix et &agrave; r&eacute;&eacute;valuer le sens de l'ascension &agrave; une &eacute;poque marqu&eacute;e par l'entropie. C'est un t&eacute;moignage du pouvoir de l'art de provoquer la r&eacute;flexion, d'&eacute;voquer l'&eacute;motion et d'inspirer le changement.</p>\r\n<p>Pr&eacute;c&eacute;demment expos&eacute;e &agrave; l'Iklectik Art Lab de Londres et soutenue par l'Arts Council Korea.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 54500,
            "forum_user": {
                "id": 54438,
                "user": 54500,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_0848.JPG",
                "avatar_url": "/media/cache/98/80/988087b0029b1bcd8d01a7de1334843e.jpg",
                "biography": "Thomas Bugg is a multidisciplinary artist and designer. Through installation, moving image, sound and performance, his work delves into the subtle and often overlooked aspects of human existence, considering the ephemeral and intangible elements that shape our lives and influence our connection to the world.\n\nCurrently pursuing the MA Information Experience Design programme at the Royal College of Art, Thomas previously earned his BA (Hons) in Graphic Communication Design (with Creative Computing) from Central Saint Martins in 2021. \n\nBeyond his personal practice, Thomas has taught on the MA Biodesign programme at Central Saint Martins. His professional engagements include projects for notable clients such as Nike, Arena Homme +, Snap Inc, Kandinsky Theatre and with institutions like the Royal College of Art, The Open Data Institute and Central Saint Martins.\n\nHis work has been previously exhibited at Iklectik Art Lab, Corner 7 Gallery and the Gerald Moore Gallery. His work has been published by the likes of Thames & Hudson, Design Week, People of Print and Campaign and has been shortlisted for the LVMH Maison/0 Green Trail Award.",
                "date_modified": "2024-03-27T15:49:26.300413+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "tjabugg",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "this-bitter-sweet-thing",
        "pk": 2791,
        "published": true,
        "publish_date": "2024-03-03T20:22:49+01:00"
    },
    {
        "title": "« liberated frequencies (short demo ver.) » by Keigo Yoshida (Japan)",
        "description": "\"liberated frequencies\" redefines auditory pleasure by freeing AI from human-centric aesthetics. In advance, glitch, noise, voice, and experimental sounds were rated by a subject based on perceived pleasure. During the demo, AI learns from the most highly rated sounds and generates evolving soundscapes.\r\n  \r\nThe subject wears EEG sensors measuring theta waves (4–8 Hz), linked to auditory pleasure. When brain activity indicates increased pleasure, the AI disrupts it—altering pitch, tempo, and rhythm to deviate from the subject’s preferences. This creates a feedback loop that challenges the boundaries of comfort, asking whether such “liberated” sounds disturb or expand our auditory experience.\r\n\r\nThis work will be presented as a demo at the IRCAM Forum Workshops Taipei 2025.",
        "content": "<p></p>\r\n<p><em>liberated frequencies - </em>explores unprecedented soundscapes that defy our traditional auditory pleasures by \"liberating\" AI from the limitations of human-defined &lsquo;pleasing'.<br />Before the production, our team gathered glitch, experimental, voice and noise sounds, which a subject later rated based on the pleasure they evoked. During demo, the AI continuously learns in real-time from the highest-rated sounds. Utilizing this sound data, the AI predicts and generates the subsequent auditory experiences, creating an evolving and immersive soundscape.<br />The subject in the soundscape wears EEG sensors that measure real-time theta waves (4-8 Hz) of brain activity.&nbsp; According to Sammler et al. (2007), increased activity in this frequency band is typically associated with intensified auditory pleasure. However, in response to this heightened brain-based pleasure, the AI&mdash;continuously learning from the real-time EEG data&mdash;intentionally disrupts the experience. It transforms the generated sounds, subtly altering pitches, waveforms, tempos and syncopations, gradually diverging from the original sound patterns the subject found pleasurable.</p>\r\n<p>This deliberate shift invites the viewer to explore the boundaries of discomfort, challenging the conventional auditory aesthetics inherently favored by human perception. 
Do these deliberately 'liberated' sounds merely traumatize the human senses, or do they open a gateway to new auditory expressions and possibilities?</p>\r\n<p>github: <a href=\"https://github.com/keigoyoshida7/liberated-frequencies\" title=\"liberated frequencies\">https://github.com/keigoyoshida7/liberated-frequencies</a></p>\r\n<p>HP: <a href=\"https://keigoyoshida.jp/room20.html\" title=\"liberated frequencies\">https://keigoyoshida.jp/room20.html</a></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3914b9341c67554e9cb7e2de1a08953a.png\" /><br /><br /></p>\r\n<p></p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/fb52737e57fa7d2f92842d524b502eb4.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>",
        "topics": [
            {
                "id": 3462,
                "name": "AI & Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3465,
                "name": "auditory pleasure",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3463,
                "name": "EEG",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3461,
                "name": "Improvised Generative Music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3464,
                "name": "Theta wave",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 122344,
            "forum_user": {
                "id": 122180,
                "user": 122344,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/press-photo.jpg",
                "avatar_url": "/media/cache/d4/16/d416b58e1caadfe8dea60bd812255263.jpg",
                "biography": "Keigo Yoshida is an artist and scientist affiliated with Center for Music Neuroscience at Keio University Graduate school of Media and Governance. He explores music through the perspectives of neuroscience and computer science as machine learning, integrating insights into various forms of artistic expression, including audiovisual works, installations, and musical compositions.\n\nHis notable works include Propagation (A/V performance), Mineral Neurons (A/V performance) at Sónar+D, liberated frequencies (A/V performance and installation in collaboration with METI and Rhizomatiks), Reservoir Audio Visual Performance (presented at TEDx KeioU Conference), and Artificial Heart Brain (a project from Keio University's Data-Driven Class, Daito Manabe Grand Prize). Additionally, he worked on Hanamizuki Reworked, feat. Yo Hitoto.\nAs a VJ, he performs in Radio Sakamoto Uday for SE SO NEON and TOWA TEI.\n\nBeyond his creative endeavors, he has actively contributed to the field of music neuroscience. He performed an AI-driven showcase at Tsukuba Conference For Future Shapers 2023 and presented his research at The Neurosciences and Music - VIII in Helsinki, Finland.",
                "date_modified": "2026-03-02T06:52:37.972899+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1439,
                        "forum_user": 122180,
                        "date_start": "2026-03-16",
                        "date_end": "2027-03-16",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "keigoyoshida",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3750,
                    "user": 122344,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4366,
                    "user": 122344,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4367,
                    "user": 122344,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4147,
                    "user": 122344,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "liberated-frequencies-short-demo-ver-by-keigo-yoshida-japan",
        "pk": 3750,
        "published": true,
        "publish_date": "2025-10-03T10:41:45+02:00"
    },
    {
        "title": "Partiels - Exploring, Analyzing and Understanding Sounds by Pierre Guillot",
        "description": "",
        "content": "<div>In this talk, Pierre Guillot will give a brief introduction to the historical heritage and the artistic and research context in which Partiels is developed, highlighting the challenges and innovative nature of the project. We will then present the possibilities offered by this suite of tools and discuss the prospects for further developments and improvements. Partiels is an audio analysis application and collection of plug-ins that lets you analyze one or more audio files using Vamp plug-ins, load data files, visualize, edit, organize, and export results as images or text files that can be used in other applications such as Max, Pure Data, Open Music, and more. In parallel with Partiels, a set of analyses is ported to Ircam's Vamp plug-ins: SuperVP, IrcamBeat, IrcamDescriptor, PM2, FCN, Crepe, Whisper. These plug-ins enable FFT, LPC, transient, fundamental, formant, tempo, STT, and other analyses.</div>\r\n<div></div>\r\n<div><a href=\"https://forum.ircam.fr/projects/detail/partiels/\">https://forum.ircam.fr/projects/detail/partiels/</a></div>\r\n<div></div>\r\n<div><img src=\"/media/uploads/partiels-v2.0.0-sample-v2.gif\" alt=\"\" width=\"640\" height=\"414\" /></div>",
        "topics": [],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "partiels-exploring-analyzing-and-understanding-sounds-by-pierre-guillot",
        "pk": 3076,
        "published": true,
        "publish_date": "2024-10-25T11:15:31+02:00"
    },
    {
        "title": "Sound design - specific tools and methods to design the sound",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>Assuming that sound design can be understood as making design with sound, the discipline needs to develop specific tools and methods to deal with its particular matter of design, that is, sound. From this point of view, one crucial point concerns sketching sounds with the help of efficient representations or media.</p>\r\n<p>Through collaborative research projects and industrial applied work, we (the Ircam STMS Lab Sound Perception &amp; Design group) have been developing, for several years, tools and environments to implement sonic sketching paradigms. One of them, a sound lexicon called Speak, was initially developed &ndash;&nbsp;and is still a work in progress &ndash; to allow the definition of semantic portraits on the basis of basic sonic properties, their definitions, and their illustrations with mastered sound examples.</p>\r\n<p>This tool was used in a 2-year industrial collaboration within the wine industry to support a collaborative sound design approach, interpreting and translating oenological characters into sonic properties and, in fine, designing an augmented experience of wine tasting.</p>\r\n<p>This long-term collaboration was also the opportunity to instantiate a singular artistic/scientific articulation by embedding a composer, Roque Rivas, at the very beginning of the project and giving him all the necessary and possible means to realise an informed sound design piece related to wine typologies and features. The talk will present this applied research, its conceptual and operational tooling, and its main musical outcomes.</p>",
        "topics": [],
        "user": {
            "pk": 115,
            "forum_user": {
                "id": 115,
                "user": 115,
                "first_name": "Nicolas",
                "last_name": "Misdariis",
                "avatar": "https://forum.ircam.fr/media/avatars/myPhoto_CR.JPG",
                "avatar_url": "/media/cache/2c/cd/2ccdde6a292f0a0054c61094af3111b8.jpg",
                "biography": "I am a research director, head of the Ircam STMS Lab / Sound Perception & Design group, and presently deputy head of the Ircam STMS Lab. I graduated from an engineering school specializing in mechanics (1993), completed my Master's thesis on applied acoustics, and earned my PhD on the synthesis, reproduction, and perception of musical and environmental sounds. Some years ago, I defended my HDR (Habilitation to Direct Research) on the topic of the Sciences of Sound Design. I have been working at Ircam as a research fellow since 1995 and contributed, in 1999, to the introduction of sound design at the Institute. During that time, I developed research work and industrial applications related to sound synthesis and reproduction, environmental sound and soundscape perception, auditory display, human-machine interfaces (HMI), interactive sonification, and sound design. Since 2010, I have also been a regular lecturer in the Sound Design Master at the High School of Art and Design in Le Mans (ESAD TALM, Le Mans).",
                "date_modified": "2026-03-02T12:04:38.503876+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 259,
                        "forum_user": 115,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "misdarii",
            "first_name": "Nicolas",
            "last_name": "Misdariis",
            "bookmarks": []
        },
        "slug": "sound-design-specific-tools-and-methods-to-design-the-sound",
        "pk": 1335,
        "published": true,
        "publish_date": "2022-09-13T12:57:05+02:00"
    },
    {
        "title": "Vase by Yuval Seeberger",
        "description": "Electroacoustic sound installation for Motorized Music-box with MAX/MSP and RAVE models (2025)",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><strong>System Architecture and Sonic Organization<br /><br /></strong><em>Vase</em> is a motorized music-box controlled by a MAX/MSP patch, emitting a semi-algorithmic composition for MAX/MSP, RAVE models, and perforated paper scores. An approximately 12-minute loop, it consists of a wooden resonance box, an aluminum cage, an Arduino board, a piezo pickup, a magnetic rail-coil pickup, a 12 V motor, and a punched paper strip.</p>\r\n<p><em>Vase</em> employs algorithmic techniques, balancing formal organization with an unpredictable musical progression. Due to the irregular correlation between the computer and the music-box, each loop cycle deviates slightly from what should be an identical repetition, resulting in subtle variations. With the aim of creating sonic layering, four primary elements were selected for the overall configuration: an acoustic-mechanical music-box; an analog motor; sound processing and synthesis in Max/MSP; and RAVE real-time neural audio synthesis models (isis.ts from the IRCAM team and mbox2.ts, self-trained on a MIDI music-box database). 
These elements form a fixed \"ensemble of performers,\" each introducing distinct sonic characteristics into the linear-spatial composition.<br /><br /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3967d4ebc5c9b2e528ca4753bbd36cab.jpg\" /></p>\r\n<p><span>Photo: Hannah Franke, <em>Vase</em></span></p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/66e80c4adebe28ef12e87e219b4d0e9a.png\" /></p>\r\n<p><span>Max/MSP patch for </span><em><span>Vase<br /><br /></span></em></p>\r\n<p><span><strong>Further applications of <em>Vase</em></strong></span></p>\r\n<p>For live performance with <em>Vase</em>, a custom MAX/MSP patch was made to enable real-time control of the music-box via a MIDI controller. The interface allows continuous control over motor speed, live mixing of the system's sonic components, <br />and dynamic manipulation of algorithmic parameters.</p>\r\n<p><br /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/6b7f41a110ab4e02f329fe72367ed87c.png\" /><br /><span>Live control patch for the music-box&nbsp;</span></p>\r\n<p>&nbsp;</p>\r\n<p><strong>Credits:<br /></strong>RAVE (Real-Time Audio Variational Autoencoder) &mdash; developed by the ACIDS team, IRCAM, Paris.</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 1758,
                "name": "algorithmic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3239,
                "name": "electroacoustic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 66638,
            "forum_user": {
                "id": 66568,
                "user": 66638,
                "first_name": "Yuval",
                "last_name": "Seeberger",
                "avatar": "https://forum.ircam.fr/media/avatars/PHOTO-2024-11-21-16-19-33.jpg",
                "avatar_url": "/media/cache/f1/32/f1321aa37305c258f50acd2e5986a7ef.jpg",
                "biography": "Composer Yuval Seeberger (b. 1996, based in Leipzig) began his musical journey with rigorous instrumental compositions before venturing into computer-based music. His education includes HMTM Munich under Prof. Moritz Eggert, the Jerusalem Academy of Music and Dance under Prof. Amnon Wolman, and Ircam under the ACIDS team; he is currently studying for his M.Mus at the Hochschule für Musik in Dresden under Prof. Stefan Prins. Among his accomplishments are two composition prizes after Mark Kopytman, a Dean Award, and a ballet composition performed at the 9. Biennale Tanzausbildung and at the Bayerische Staatsoper. Alongside an exploration of new possibilities in the use of machine learning and algorithmic technologies, Seeberger’s works often present engulfment, density, suffocation, unpredictability, and an almost physical approach to music.",
                "date_modified": "2026-02-09T17:25:32.498179+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 957,
                        "forum_user": 66568,
                        "date_start": "2024-10-08",
                        "date_end": "2025-10-08",
                        "type": 0,
                        "keys": [
                            {
                                "id": 591,
                                "membership": 957
                            },
                            {
                                "id": 790,
                                "membership": 957
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "yuvalseeberger",
            "first_name": "Yuval",
            "last_name": "Seeberger",
            "bookmarks": []
        },
        "slug": "vase-sound-installation-by-yuval-seeberger",
        "pk": 4320,
        "published": true,
        "publish_date": "2026-02-05T18:58:59+01:00"
    },
    {
        "title": "Dicy2 Tutorials",
        "description": "This page gathers video tutorials on Dicy2 for Max and Dicy2 for Live.",
        "content": "<h3>An intro to Dicy2:</h3>\r\n<p><iframe width=\"425\" height=\"350\" style=\"width: 50% !important;\" src=\"https://medias.ircam.fr/embed/media/x6ddbd2_dicy2\"></iframe></p>\r\n<h3>Tutorial #000: Dicy2: Introduction</h3>\r\n<p><iframe width=\"425\" height=\"350\" style=\"width: 50% !important;\" src=\"//www.youtube.com/embed/xt8-rlqMIQM\"></iframe></p>\r\n<h3>Tutorial #001: Dicy2 for Max - Concepts</h3>\r\n<p><iframe width=\"425\" height=\"350\" style=\"width: 50% !important;\" src=\"//www.youtube.com/embed/Z489icU_ZDs\"></iframe></p>\r\n<h3>Tutorial #002: Dicy2 for Live</h3>\r\n<p><iframe width=\"425\" height=\"350\" style=\"width: 50% !important;\" src=\"//www.youtube.com/embed/6pE-HRT4fN0\"></iframe></p>\r\n<h3>Tutorial #003: Dicy2: Audio Interactions</h3>\r\n<p><iframe width=\"425\" height=\"350\" style=\"width: 50% !important;\" src=\"//www.youtube.com/embed/Pejw5IWqLm8\"></iframe></p>\r\n<h3>Tutorial #004: Dicy2:&nbsp;Performance Strategies</h3>\r\n<p><iframe width=\"425\" height=\"350\" style=\"width: 50% !important;\" src=\"//www.youtube.com/embed/boq8znuPDu0\"></iframe></p>\r\n<h3>Tutorial #005: Dicy2:&nbsp;Agents and Scenarios</h3>\r\n<p><iframe width=\"425\" height=\"350\" style=\"width: 50% !important;\" src=\"//www.youtube.com/embed/G25HzBKSdy0\"></iframe></p>\r\n<h3>Tutorial #006: Dicy2:&nbsp;Chaining Agents</h3>\r\n<p><iframe width=\"425\" height=\"350\" style=\"width: 50% !important;\" src=\"//www.youtube.com/embed/otj2tACKjxM\"></iframe></p>\r\n<p></p>\r\n<h3>Patches and audio files available in the package.</h3>\r\n<p><a href=\"https://forum.ircam.fr/projects/detail/dicy2/\">https://forum.ircam.fr/projects/detail/dicy2/</a></p>\r\n<p></p>\r\n<p><span style=\"font-weight: 400;\">Dicy2 by J&eacute;r&ocirc;me Nika, Augustin Muller, and Joakim Borg. 
Dicy2 is both a package for Max and a plugin for Ableton Live by Ircam's Musical Representations team.</span></p>\r\n<p><span style=\"font-weight: 400;\">Contributions, tutorial patchers, documentation, and tutorial videos by Matthew Ostrowski. Max for Live plugin by Manuel Poletti.&nbsp;</span></p>\r\n<p><span style=\"font-weight: 400;\">Developed in the framework of the projects ANR-DYCI2, ANR-MERCI, ERC-REACH directed by G&eacute;rard Assayag, and the UPI-CompAI Ircam project.</span></p>\r\n<p></p>\r\n<p><span style=\"font-weight: 400;\">The audio use cases have been designed and developed with Diemo Schwarz and Riccardo Borghesi, and use the MuBu and CatArt environments of the ISMM team of Ircam. Contributions / thanks: Serge Lemouton, Jean Bresson, Thibaut Carpentier, Georges Bloch, Mikha&iuml;l Malt, Axel Chemla-Romeu-Santos, Tristan Carsault, Vincent Cusson, Tommy Davis, Dionysios Papanicolaou, Greg Beller, Markus Noisternig.</span></p>\r\n<p><br /><span style=\"font-weight: 400;\">Related research article: </span><span style=\"font-weight: 400;\"><br /></span><span style=\"font-weight: 400;\">Nika, J&eacute;r&ocirc;me, et al. \"DYCI2 agents: merging the 'free', 'reactive', and 'scenario-based' music generation paradigms.\" </span><i><span style=\"font-weight: 400;\">International computer music conference</span></i><span style=\"font-weight: 400;\">. 2017.</span></p>",
        "topics": [
            {
                "id": 1036,
                "name": "DICY2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 545,
                "name": "Repmus team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18367,
            "forum_user": {
                "id": 18360,
                "user": 18367,
                "first_name": "Jerome",
                "last_name": "Nika",
                "avatar": "https://forum.ircam.fr/media/avatars/jerome_nika-466x233.jpg",
                "avatar_url": "/media/cache/f2/20/f220de2bc73567220b06bd17faf4baa1.jpg",
                "biography": "As a researcher at Ircam, Jérôme Nika’s work focuses on how to model, learn, and navigate an “artificial musical memory” in creative contexts. In opposition to a “replacement approach” where AI would substitute for human, this research aims at designing novel creative practices involving a certain level of symbolic abstraction such as “interpreting / improvising the intentions” and “composing the narration“. \nNumerous productions have the resulting technologies: Roulette, NYC; Onassis Center, Athens; Ars Electronica Festival, Linz; Frankfurter Positionen festival; Annenberg Center, Philadelphia; Bimhuis, Amsterdam; French embassy Washington DC; Maison de la Radio, Centre Pompidou, Collège de France, LeCentquatre, Paris; Montreux Jazz Festival; Montreal Jazz Festival etc.\nAs a musician, computer music designer, or scientific advisor, he is involved in numerous musical productions and artistic collaborations, particularly in improvised music (Steve Lehman, Orchestre National de Jazz, Bernard Lubat, Benoît Delbecq, Rémi Fox), contemporary music (Pascal Dusapin, Alexandros Markeas, Ensemble Modern, Marta Gentilucci), and contemporary art (Le Fresnoy).",
                "date_modified": "2026-02-23T11:56:29.425335+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 644,
                        "forum_user": 18360,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-25",
                        "type": 0,
                        "keys": [
                            {
                                "id": 448,
                                "membership": 644
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "jnika",
            "first_name": "Jerome",
            "last_name": "Nika",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2757,
                    "user": 18367,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "dicy2-tutorials",
        "pk": 1993,
        "published": true,
        "publish_date": "2022-12-06T15:52:23+01:00"
    },
    {
        "title": "Música Visual Electrónica. Elementos de Creación Audiovisual de Dave Payling",
        "description": "La obra es un marco de referencia para comprender formas artísticas que relacionan las cualidades musicales de los sonidos y las imágenes en una sola creación, en total armonía. De acuerdo con el autor, la Música Visual Electrónica es una forma creativa y  un  proceso  que  ha  sido  poco  estudiado  por  los  historiadores  del  arte,  debido  a  que  se  trata  de  un  fenómeno  relativamente  reciente.  Sin  embargo,  su  desarrollo  ha sido estable, diverso y productivo, teniendo su mayor auge durante el siglo XX. Asimismo, su fuerza creativa dejó una estela que se ha extendido al siglo XXI, impul-sada por las nuevas tecnologías electrónicas.",
        "content": "<p><a href=\"https://revistas.unal.edu.co/index.php/estetica/article/view/124699/96334\" title=\"Electronic Visual Music. The Elements of Audiovisual Creativity, by Dave Payling\">https://revistas.unal.edu.co/index.php/estetica/article/view/124699/96334</a></p>",
        "topics": [],
        "user": {
            "pk": 151522,
            "forum_user": {
                "id": 151304,
                "user": 151522,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/943033615147913836ba0641fe23cddb?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-12-20T05:20:26.608698+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "alexcasales",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "musica-visual-electronica-elementos-de-creacion-audiovisual-de-dave-payling",
        "pk": 4095,
        "published": false,
        "publish_date": "2025-12-20T05:19:51.909817+01:00"
    },
    {
        "title": "Decorrelated Spatial Synthesis and OVERTON synthesizer by Martin ANTIPHON",
        "description": "Overton is an instrument inspired by classic synthesizers, enhanced by a 3D audio engine and Decorrelated Spatial Synthesis.\r\nIt offers musicians a convenient tool to easily create sounds in 3D audio.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1868e9febea8509f2a53a9fad05bc00e.png\" width=\"1013\" height=\"437\" /></p>\r\n<p>Presented by : Martin Antihpon</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/MartinAntiphon/\" target=\"_blank\">Biography</a></p>\r\n<p>Decorrelated Spatial Synthesis involves the addition of classical synthesizer parts to spatial coordinates, and establishes a correlation between synthesis parameters and spatial positions. For each polyphony voice, each section of the synthesis (e.g. oscillators, filters, amplifiers, envelopes) is multiplied and becomes a new entity: VCSOs (with an &laquo; S &raquo; for &laquo; Spatial &raquo;) are Voltage Controlled Spatial Oscillators, and contain multiple VCOs. These entities are controlled by two new high-level entities called VCSS (for Voltaged Controlled Spatial Spread) and SCS fo (Spatial Coordinates Synthesis).</p>\r\n<p>Overton is an instrument that uses this particular synthesis model. Programmed with Max, it uses spat5.&nbsp;&nbsp;</p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 276,
                "name": "Spat 5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2567,
                "name": "synth",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 59,
                "name": "Synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1779,
                "name": "Synthesizer",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1021,
            "forum_user": {
                "id": 1021,
                "user": 1021,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/PortraitMU_Martin_Antiphon.jpg",
                "avatar_url": "/media/cache/32/34/3234bcf828a4be0f8a1b4026963834e4.jpg",
                "biography": "Sound engineer, 3D audio designer, producer and composer, Martin Antiphon is leaving his position as sound manager at IRCAM in 2010 to join the Music Unit team. He already has numerous studio collaborations to his credit with Ibrahim Maalouf, Balake Sissoko, Rone or Vanessa Wagner, as well as concerts throughout Europe as a live electronic performer for Kaija Saariaho, Sivan Eldar and Sebastian Rivas. On the strength of his mastery of traditional mixing techniques and spatial audio technologies, Martin is now working on converging his skills in the field of immersive audio. He is currently CTO of Music Unit, within wich he has developed a patented 3D audio synthesiser. However Martin continues to create and recently inaugurated his first sound installation, Lo Parlament, in his home town of Pau.\nSince 2022, Martin is vice-president of the French section of the AES.",
                "date_modified": "2026-02-25T17:51:20.352692+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": true,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 486,
                        "forum_user": 1021,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-25",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "MartinAntiphon",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "decorrelated-spatial-synthesis-and-overton-synthesizer-by-martin-antiphon",
        "pk": 3229,
        "published": true,
        "publish_date": "2025-01-27T16:31:20+01:00"
    },
    {
        "title": "\"Rewriting the Score in 360°: Immersive Sound as a Creative Language for Live Performance\" by Rodrig De Sa",
        "description": "How can immersive sound reshape the way we compose and perform music?\r\nThis talk explores real-world projects where 360° audio becomes a narrative and emotional tool. From violin solos to modular synth lives, 360Prod collaborates with artists to develop portable, sustainable setups that turn space into a stage.",
        "content": "<h5 id=\"➡️-this-presentation-is-part-of-ircam-forum-workshops-paris-engh\"><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></h5>\r\n<h3>Immersive audio is often approached as a technical challenge ; <strong>but<span style=\"text-decoration: underline;\"> what happens when we treat it as a language ?</span></strong></h3>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e3335f9391dbe27d8be71e6612febab9.png\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5cb388438656a0af36be677cf4909523.png\" /></p>\r\n<p>In this talk, Rodrig De Sa (co-founder of 360Prod) and Charlely Mahaut (sound engineer of 360Prod) &nbsp;presents how his team works hand-in-hand with musicians, composers, and performers to create spatial works where space is not just a parameter, but a dramaturgical force.</p>\r\n<p>From <em>Symphonie des Airs</em> (a solo violin piece using spatial phrasing and dynamic presence) to <em>L&rsquo;&OElig;il du Cyclone</em> (an electro-ambient analog synth show with live spatial movements), the projects discussed span genres but share one core idea: sound moves, and movement means meaning.</p>\r\n<p>360Prod develops the <strong>Soundarium</strong>, a fully battery-powered, solar-charged immersive system designed for touring. 
With a circular speaker layout and real-time control via SPAT Revolution and OSC interfaces, it allows artists to perform immersive works in places with no infrastructure: from forests to festivals, abandoned spaces to proscenium stages.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/611d65d0f064c33cb9d5c2a9da6f23c6.png\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5c54d939d19f66cceb3d2b851db950d5.png\" /></p>\r\n<p>But the tools are only one side of the story. This talk also reflects on how immersive audio invites new forms of writing, staging, and listening. By collaborating from the very beginning of the creation process, 360Prod helps artists craft experiences that are both technically feasible and emotionally rich.</p>\r\n<p>Spatial audio doesn&rsquo;t have to stay in the lab.</p>\r\n<p>It can travel. It can move people. It can reinvent the way we hear music, together!</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d7199483715bca57ff9bd7753a8547d7.png\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c930e89b55c6a8caf4ed87f0757cbe78.png\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/5c261b9d76246cb58c4d7ed1bc14e8ba.png\" /></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 2342,
                "name": "3d audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3978,
                "name": "artistic collaboration",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3977,
                "name": "audio performance",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2341,
                "name": "immersive audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3979,
                "name": "mobile systems",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3976,
                "name": "sound for stage",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3138,
                "name": "spatial sound",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 126031,
            "forum_user": {
                "id": 125865,
                "user": 126031,
                "first_name": "rodrig",
                "last_name": "360Prod",
                "avatar": "https://forum.ircam.fr/media/avatars/Headshot_Rodrig_DE_SA.JPG",
                "avatar_url": "/media/cache/75/53/75532cd254ed077df4a15a0cc30e1e66.jpg",
                "biography": "Rodrig is a musician, sound engineer, and co-founder of 360Prod, a French collective dedicated to making immersive 360° sound accessible to artists and audiences alike. From studio creation to live performance, he places artistic intention at the heart of spatial audio. \nThrough 360Prod, he designs and deploys mobile, autonomous, and eco-responsible systems that bring 3D sound experiences into new spaces : from theaters to outdoor venues ; transforming how sound is perceived and shared.\n\nLet's move sound for muisc from studio to Live !!",
                "date_modified": "2026-02-21T19:49:42.319769+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1169,
                        "forum_user": 125865,
                        "date_start": "2025-07-27",
                        "date_end": "2026-07-27",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "pfiouleson3d",
            "first_name": "rodrig",
            "last_name": "360Prod",
            "bookmarks": []
        },
        "slug": "rewriting-the-score-in-360-immersive-sound-as-a-creative-language-for-live-performance",
        "pk": 4137,
        "published": true,
        "publish_date": "2026-01-04T22:17:50+01:00"
    },
    {
        "title": "Somax version 2.6.1 is out!",
        "description": "This release contains a number of small but important fixes related to real-time corpus recording, as well as a new tutorial for app users.",
        "content": "<ul>\r\n<li><strong>Recording Latency Correction:</strong> The <code>somax.audiorecord</code> object will now automatically adjust the recorded slices based on the latency of the associated audioinfluencer in order to achieve better segmentation. This parameter can be controlled in the corpus recording settings.</li>\r\n</ul>\r\n<p><img style=\"height: 650px;\" alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/14c49dbf4f24a1a934f66e325bffd623.png\" /></p>\r\n<ul>\r\n<li><strong>Audiorecord Sample Rate Mismatch:</strong> The <code>somax.audiorecord</code> object now provides explicit warnings when the user tries to record into an existing corpus based on an audio file with a different sample rate than Max's. This release also fixes a number of bugs related to issues with underlying buffer sample rates. See the \"sample rate mismatch\" tab of the <code>somax.audiorecord</code> maxhelp.</li>\r\n</ul>\r\n<p><img style=\"height: 650px;\" alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b6084dd9500b78dfa82ce0f2b842b9a2.png\" /></p>\r\n<ul>\r\n<li><strong>Real-time Corpus Reloading:</strong> When using multiple record-enabled players, it's now possible to load a corpus into either of the players without causing audio glitches or interrupts to the other players while loading.</li>\r\n<li><strong>\"Script your Environment\" Tutorial:</strong> A new tutorial on preparing your Somax2 environment and controlling the parameters of any .app-object using scripting messages has been added.</li>\r\n</ul>\r\n<p><img style=\"height: 650px;\" alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a4eb3a4d59708f4de9a267264562f3bf.png\" /></p>\r\n<ul>\r\n<li><strong>Various Bug Fixes:</strong> A number of bug fixes and clarifications have been added, as well as documentation updates.</li>\r\n</ul>\r\n<p>Goto to <a href=\"https://forum.ircam.fr/projects/detail/somax-2/\">Somax2 Forum page</a> for installation</p>\r\n<p>See more at <a 
href=\"http://repmus.ircam.fr/somax2\">Somax2 Project Page </a></p>\r\n<p>Somax2 is an application for musical improvisation and composition using AI with machine listening, cognitive memory activation model, multi-agent architecture, full application interface to agent patching and control, and full Max library API. Somax2 is implemented in <a href=\"https://cycling74.com/products/max/\">Max</a> and Python and is based on a generative AI model to provide real-time machine improvisations coherent both with the internal selected corpus styles and with the unfolding external musical context. Somax2 handles both MIDI and audio input, corpus memory, and output. The model can be used with little configuration to let its agents autonomously interact with musicians (and one with another), but it also allows a variety of manual controls of its generative process and interaction strategies, effectively letting one use it as a fully flexible smart instrument.</p>",
        "topics": [
            {
                "id": 1989,
                "name": "artificial intelligence",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 546,
                "name": "Ia",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1651,
                "name": "Improvisation, générativité et interactions co-créatives",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1287,
                "name": "REACH",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 545,
                "name": "Repmus team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 747,
                "name": "somax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32267,
            "forum_user": {
                "id": 32219,
                "user": 32267,
                "first_name": "Marco",
                "last_name": "Fiorini",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-01-16_at_10.39.51.jpeg",
                "avatar_url": "/media/cache/e7/ed/e7ed5b0d44a066e65e188a351b8c9bb8.jpg",
                "biography": "Marco Fiorini is an Italian musician and researcher specializing in human-machine interaction in musical improvisation. \nHe is part of the Music Representation team at IRCAM in Paris, working on the ERC REACH project with a focus on Somax2. \nAs a PhD candidate at Sorbonne Université he develops co-creative instruments that foster real-time interaction between musicians and artificial agents.\nHe has collaborated with artists such as Jöelle Léandre, George Lewis, Steve Lehman, and Horse Lords. His work as guitarist, electronic musican and computer music designer have been featured at major international venues and festivals including Carnegie Hall (New York), ManiFeste (Centre Georges Pompidou, Paris), Improtech Paris-Tokyo (Tokyo University of the Arts), Klang (Royal Danish Academy of Music, Copenhagen), Mixtur (ESMUC, Barcelona).\nIn 2024, he is an invited lecturer at the Max Summer School at Tokyo Geidai University of the Arts, and in 2025 he will lead a Somax2 workshop at Berklee College of Music for the 50th anniversary of the International Computer Music Conference in Boston.\nHe holds degrees in Jazz Guitar, Electronic Music, Sound and Music Computing and Software Engineering.",
                "date_modified": "2026-02-25T18:50:33.457396+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 407,
                        "forum_user": 32219,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "fiorini",
            "first_name": "Marco",
            "last_name": "Fiorini",
            "bookmarks": []
        },
        "slug": "somax-version-261-is-out",
        "pk": 2902,
        "published": true,
        "publish_date": "2024-05-29T23:51:48+02:00"
    },
    {
        "title": "Collective Music Interaction Using Network Technologies",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>We developed several flexible systems to design collective music interaction. These systems, based on web technologies and mobile phones, allow for the implementation of various interaction paradigms, from distributed listening systems with hundreds of mobile phones, to collective gesture-based sound control using the embedded smartphone motion sensors.</p>\r\n<p>These systems are currently used in different artistic and educational contexts, including concerts with active public participation, installations, dance workshops, as well as in music education.</p>",
        "topics": [],
        "user": {
            "pk": 21,
            "forum_user": {
                "id": 21,
                "user": 21,
                "first_name": "Frederic",
                "last_name": "Bevilacqua",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a5c31b02a13ce493dbe36917564770e5?s=120&d=retro",
                "biography": "Frédéric Bevilacqua is the head of the Sound Music Movement Interaction team at IRCAM in Paris, in the joint research lab Science & Technology for Music and Sound – IRCAM – CNRS – Sorbonne Université. His research concerns the interaction between movement and sound and the development of gesture-based interactive systems, with applications in performing arts, education and health.\n\nHe holds an MS in physics and a Ph.D. in Biomedical Optics from EPFL. He studied music at the Berklee College of Music in Boston. From 1999 to 2003 he was a researcher at the Beckman Laser Institute at the University of California Irvine. In 2003 he joined IRCAM as a researcher on gesture analysis for music and performing arts.\n\nHe has co-authored more than 150 scientific publications and 5 patents. He has been a keynote or invited speaker at several international conferences such as the ACM TEI’13. He was awarded the 1st Prize of the Guthman Musical Instrument Competition (Georgia Tech) in 2011 and received the “prix ANR du Numérique” award from the French National Research Agency (2013).",
                "date_modified": "2026-01-25T21:51:30.597035+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 12,
                        "forum_user": 21,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-17",
                        "type": 0,
                        "keys": [
                            {
                                "id": 270,
                                "membership": 12
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "bevilacq",
            "first_name": "Frederic",
            "last_name": "Bevilacqua",
            "bookmarks": []
        },
        "slug": "collective-music-interaction-using-network-technologies",
        "pk": 1336,
        "published": true,
        "publish_date": "2022-09-13T13:00:30+02:00"
    },
    {
        "title": "Howling Vault: controlling audio feedback with flocking algorithms in the dome by Jsuk Han",
        "description": "This project aims to utilize feedback sound and flocking algorithms and apply them to 3rd order ambisonic systems in the dome.",
        "content": "<p>This project is part of a series and extends my previous project <em>Howling Bird</em>, which was presented at the Ircam Forum in Paris this spring. You can check out the full project through the link below.</p>\r\n<p><a href=\"https://forum.ircam.fr/article/detail/howlingbirds/\">https://forum.ircam.fr/article/detail/howlingbirds/</a></p>\r\n<p>This <em>Howling Vault</em> project applies my previous work to an ambisonic dome. In my earlier work, <em>Logistic Feedback</em>, I used a single-layer 2D panning method. When I visited the Ircam Studio early this year to present the project, I was able to experience audio feedback in a spherical sound system. The 3D sound of the spherical system offered a more defined sweet spot and a more immersive bodily experience than single-layer 2D sound.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/42495f28d23063006e6daa0d31bc6a74.jpg\" /></p>\r\n<p><em><sub>&lt;Howling Vault 1,2,3&gt;, Oksang factory, Seoul, KR</sub></em></p>\r\n<p>&nbsp;</p>\r\n<p>After visiting Paris, I decided to build my own dome, and a good opportunity to exhibit it soon followed. The dome was built as a geodesic structure (3v 3/8, 6 m diameter). I designed a 12.2-channel sound system (5:5:2) on top of the structure and invited three artists from different backgrounds (an audiovisual artist, a traditional instrument player, and a noise musician) to give three performances inside the dome. The archive videos are linked below. Here, I used SPAT5 and panned the sound sources with the VBAP 3D panning method, applying a flocking algorithm of 16 particles to make the sounds move freely and randomly through the space.</p>\r\n<p>&nbsp;</p>\r\n<p><sub><em>HAN JSUK x Ko Hui. Howling Vault 1</em></sub></p>\r\n<p><sub><a href=\"https://youtu.be/lA1fE98j_eE?si=rJrQqMjN5-1YHv0W\">https://youtu.be/lA1fE98j_eE?si=rJrQqMjN5-1YHv0W</a></sub></p>\r\n<p><sub><em>HAN JSUK x Song Jiyun. Howling Vault 2</em></sub></p>\r\n<p><sub><a href=\"https://youtu.be/CIaVWhWGpnE?si=rjbBlUgEb-L8VJFL\">https://youtu.be/CIaVWhWGpnE?si=rjbBlUgEb-L8VJFL</a></sub></p>\r\n<p><sub><em>Han JSUK x Jin Sangtae. Howling Vault 3</em></sub></p>\r\n<p><sub><a href=\"https://youtu.be/AF_6oxmisKI?si=ntAzHfsY8trQ8yf7\">https://youtu.be/AF_6oxmisKI?si=ntAzHfsY8trQ8yf7</a></sub></p>\r\n<p>&nbsp;</p>\r\n<p>Recently, I constructed a second dome, a 3v 5/8 hemisphere 7 meters in diameter. The audio is configured as 16.2 channels (5:5:5:1). The structure has one more layer than before, doubling its height to 5 meters (previously 2.8 meters). Next, I am considering a fully spherical dome structure and sound system. Also, instead of hanging speakers from the dome frame, I imagine a speaker system built into the dome structure itself. I also want to add waterproofing for the sound system and create an outdoor portable concert hall that is easy to install and dismantle.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/a8e3d940666f0c693018dca15ee7f598.jpg\" /></p>\r\n<p><em><sub>&lt;Howling Vault 4&gt;, 10th The Air house festival, NamYangJu, KR</sub></em></p>\r\n<p>&nbsp;</p>\r\n<p><em><sub><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0db6c93c999f0b43ccb83f52a411ab75.png\" /></sub></em></p>\r\n<p><em><sub>16ch speaker configuration for Howling Vault 4</sub></em></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 1758,
                "name": "algorithmic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 623,
                "name": "Ambisonics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1756,
                "name": "audio feedback",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 271,
                "name": "Dome",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2279,
                "name": "flocking algorithms",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2280,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 38648,
            "forum_user": {
                "id": 38597,
                "user": 38648,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/profile03.jpg",
                "avatar_url": "/media/cache/e5/37/e5370105d6ecdc638849d782dca505c5.jpg",
                "biography": "JSUK HAN creates works of sculpture, installation, and sound performance using sound equipment that he has personally collected or produced, including speakers and microphones. Exploring sound output devices and the properties of sound, he has based his creations on research into equipment for converting electrical signals into sound waves and into the physical vibrations of speakers and waves of sound. He has used phenomena of light, sound, vibration, and resonance to spatially represent normally undetectable feedback loops as a form of communication (inputs and outputs, transmission and reception). Han participated in the 2020 ARKO Art Center feature exhibition Follow, Flow, Feed and held the solo exhibition Feedbacker: Ambitious Borderer at the OCI Museum of Art. He has recently been broadening the scope of his work through collaborations with artists in fields such as architecture, circuses, DJing, and subcultures.",
                "date_modified": "2025-12-24T04:35:40.639089+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 720,
                        "forum_user": 38597,
                        "date_start": "2024-02-07",
                        "date_end": "2026-02-07",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "jhan",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "howling-vault-controlling-audio-feedback-with-flocking-algorithms-in-the-dome",
        "pk": 3030,
        "published": true,
        "publish_date": "2024-10-15T13:11:12+02:00"
    },
    {
        "title": "Eric Montalbetti : \"Mode/Scale\" - Serge Lemouton",
        "description": "A computer-assisted composition tool built with Bach",
        "content": "<p>Presented by: Serge Lemouton<br /><a href=\"https://forum.ircam.fr/profile/lemouton/\">Biography&nbsp;</a></p>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3b501489e4a21552d6405a554f28c394.png\" width=\"1623\" height=\"507\" /></p>\r\n<p>In this presentation, we will demonstrate the software environment (developed with the Bach library) that Eric Montalbetti uses to compose in his personal harmonic system. A few musical examples will be played to illustrate this system. Like many composers of his generation, Eric Montalbetti sought to build his musical language by reconciling serialism and modality in the broad sense. To that end, he defined a set of modes and harmonic scales. It quickly became apparent to him that the classification of these modes and scales forms a closed system of 18 \"modscales\" obeying a given set of rules. He then found it necessary to explore more systematically all the characteristics of these models and the possible transitions between them, in all their transpositions.&nbsp;The machine can generate all kinds of arpeggios derived from the generative structure specific to each scale and check whether a given musical fragment matches certain of these arpeggios. It is also interesting to analyze each result according to its more or less serial qualities (by color-coding the order of appearance of the twelve tones), or according to whether or not it contains symmetries (highlighted graphically), and so on.&nbsp;</p>\r\n<p>Studying the full set of these results makes it possible to deduce potential connections or modulations, as well as sharper oppositions, or even promising polymodal superpositions, and thus to better control the harmonic palette.&nbsp;<br />After working extensively \"by hand\" and tinkering with a few patches in OpenMusic, Eric Montalbetti felt the need for more elaborate programming and asked IRCAM for tools that were both better targeted and easier to maintain and export.&nbsp;<br />Serge Lemouton therefore developed a new program in Max/Bach that makes it possible to explore a library of given modes and harmonic scales, to generate different forms of arpeggios, and to analyze any musical fragment, whether defined as a series of precise pitches or as a series of absolute intervals, against the given harmonic library. This working tool, open to further developments yet to come, should interest any composer with harmonic concerns, and it is therefore with pleasure that we will present its principle, along with a few musical examples, to the members of the Forum.&nbsp;<br /><br /></p>\r\n<p></p>\r\n<p style=\"text-align: center;\"><a href=\"https://www.ericmontalbetti.com/bio\">Biography of Eric Montalbetti</a></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/6cf974b0d546d025bd0760d5fbb0d0d7.webp\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a></strong></p>",
        "topics": [
            {
                "id": 669,
                "name": "Bach",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 175,
                "name": "CAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 362,
                "name": "Harmony",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 363,
                "name": "Scales",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 76,
            "forum_user": {
                "id": 76,
                "user": 76,
                "first_name": "Serge",
                "last_name": "Lemouton",
                "avatar": "https://forum.ircam.fr/media/avatars/deborah.jpg",
                "avatar_url": "/media/cache/eb/52/eb52181309dccd2a20b1dc1b54ef0f67.jpg",
                "biography": "Serge Lemouton\n\nComputer music designer, IRCAM\n\nAfter studying violin, musicology, music theory, and composition, Serge Lemouton specialized in the various fields of computer music at the Sonvs department of the Conservatoire national supérieur de musique de Lyon. Since 1992, he has been a computer music designer at IRCAM. He collaborates with researchers on the development of software tools and takes part in realizing the musical projects of composers including Florence Baschet, Laurent Cuniot, Michael Jarrell, Jacques Lenot, Jean-Luc Hervé, Michaël Levinas, Magnus Lindberg, Tristan Murail, Marco Stroppa, Fréderic Durieux, and others. He has notably carried out the realization and real-time performance of several works by Philippe Manoury, including K…, la frontière, On-Iron, Partita 1 and 2, and the opera Quartett by Luca Francesconi.\n\nHe is currently particularly interested in the transmission and preservation of works from the computer music repertoire.",
                "date_modified": "2026-02-27T09:18:37.644467+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 25,
                        "forum_user": 76,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [
                            {
                                "id": 276,
                                "membership": 25
                            },
                            {
                                "id": 563,
                                "membership": 25
                            },
                            {
                                "id": 751,
                                "membership": 25
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "lemouton",
            "first_name": "Serge",
            "last_name": "Lemouton",
            "bookmarks": []
        },
        "slug": "eric-montalbetti-modescale",
        "pk": 2766,
        "published": true,
        "publish_date": "2024-02-22T12:41:39+01:00"
    },
    {
        "title": "The Snail: update for macOS/Windows/iOS",
        "description": "Our high precision frequency spectrum analyzer has been updated on both mobile (iOS) and desktop (Mac and Windows) platforms.",
        "content": "<p>The <a href=\"https://apps.apple.com/us/app/the-snail/id1189140204\">iOS update</a> was long overdue. In fact, iOS 15, released in September 2021, broke The Snail 🐌 🔨, so many users had been waiting for this 2.3 update. The Snail is now back on track! The Snail is also fully compatible with all Apple devices, iPhones and iPads of all sizes; the minimum iOS version is 9.0.</p>\r\n<p>On the <a href=\"https://www.plugivery.com/products/p2242-The-Snail/\">desktop side</a>, The Snail continues its resurrection (started with version 1.3 last year) with version 1.4, notably featuring native compatibility with Apple Silicon (M1) Macs, several bug fixes, and some underlying structural changes.</p>\r\n<p><img alt=\"\" src=\"/media/uploads/user/db046301404b5b416944ba200cf8c12e.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>While the current version is now receiving more attention and maintenance at a faster pace, let us take advantage of this newsletter to give you an overview of The Snail's roadmap:</p>\r\n<p>The <strong>Snail 3</strong> (a version number that will make everyone, iOS or desktop, even) is on its way and will bring exciting new features, among them: temperaments (just intonation, baroque, Indian shrutis, maq&acirc;ms, etc.), new views (harmonicity, bubbles, score), support for Android devices, as well as notable performance optimizations. We expect The Snail 3 to be released in 2023. If you wish to enroll in the beta testing program, please contact us.</p>",
        "topics": [
            {
                "id": 240,
                "name": "Analyse du son",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 172,
                "name": "Analyse du son",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 379,
                "name": "Analysis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 344,
                "name": "Real-time audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 222,
                "name": "Spectral sound analyzer",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17617,
            "forum_user": {
                "id": 17613,
                "user": 17617,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/65285f24050c7dbd54422824b1a7c7cb?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-08-31T13:33:58.886455+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 737,
                        "forum_user": 17613,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "robert_p",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-snail-update-for-macoswindowsios",
        "pk": 1180,
        "published": true,
        "publish_date": "2022-06-27T17:58:54+02:00"
    },
    {
        "title": "Sonic Interaction: Exploring AR and VR Environments by Sinan Bokesoy",
        "description": "Recently, sonicLAB/sonicPlanet released two AR / VR applications for Apple Vision Pro and Meta Quest3. In this hands-on presentation, we will examine the emerging paradigms and challenges that arose during their development.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p><strong><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/84a577b65f2bdb0de164b9685f65c1dc.png\" width=\"942\" height=\"560\" /></strong></p>\r\n<p></p>\r\n<p>Presented by: Sinan Bokesoy</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/SinanBokesoy/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p><strong>Developing complex sound design software for AR/VR platforms offers new paradigms and possibilities, but also presents significant challenges. In this hands-on presentation, we will showcase two of our applications, PolyNodes AVP and StarWaves (freely available on the Apple App Store and Meta Store), while explaining several key aspects of their development.</strong></p>\r\n<p><strong>www.sonic-lab.com, www.sonicplanet.com</strong></p>\r\n<p><strong><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/7a21fb1277327993d116fb391055f6e4.jpg\" width=\"773\" height=\"773\" />&nbsp;&nbsp;</strong><strong><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/97602f9711697306c7db3ab7f04c1444.jpg\" width=\"771\" height=\"771\" /></strong></p>\r\n<p></p>\r\n<p>March 26th</p>",
        "topics": [
            {
                "id": 2656,
                "name": "applevisionpro",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1194,
                "name": "augmented reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2657,
                "name": "meta quest",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2652,
                "name": "polynodes",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2655,
                "name": "sonic interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2658,
                "name": "starwaves",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 301,
                "name": "Virtual reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 15446,
            "forum_user": {
                "id": 15443,
                "user": 15446,
                "first_name": "Sinan",
                "last_name": "Bokesoy",
                "avatar": "https://forum.ircam.fr/media/avatars/sinanportre_png.png",
                "avatar_url": "/media/cache/91/1d/911d705a8e8a4fc32df04be63c997ed8.jpg",
                "biography": "Sinan Bokesoy is an engineer, developer, and sound artist with a PhD in computer music. As the founder of sonicLAB/sonicPlanet, he has transformed his academic expertise into practical tools for composers and producers, designing software instruments that integrate algorithmic approaches with mathematical models and physical processes to create self-evolving sonic structures. Bokesoy’s work has been published and presented at numerous academic institutions and artistic events. Recognized with awards for his innovative developments, he bridges artistic creativity, scientific exploration, and technological innovation—carving out a niche in the audio tech industry.",
                "date_modified": "2026-03-02T17:03:48.699325+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "SinanBokesoy",
            "first_name": "Sinan",
            "last_name": "Bokesoy",
            "bookmarks": []
        },
        "slug": "sonic-interaction-exploring-ar-and-vr-environments",
        "pk": 3292,
        "published": true,
        "publish_date": "2025-02-16T21:24:11+01:00"
    },
    {
        "title": "Fragments 1-11 by Yun Park & Jeanyoon Choi",
        "description": "《Fragments 1-11》 explores how AI opens a new paradigm of artistic creation in the era of generative AI. This exhibition demonstrates a multidimensional process of transformation: text is converted into images, images into 3D forms, 3D into physical objects, and objects back into video, examining the potential of AI to merge and transcend boundaries. 《Fragments 1-11》 delves into the possibilities of media translation and genre fusion by designing programs or machines that transform text into visual forms, and by injecting data that lends volume and texture to images or graphics, producing novel visual results. Through the intentional selection and editing of these generated results, the artist investigates the unique qualities of AI-driven creation, introducing the concept of “Multi-Dimensional Art.”\r\n\r\nPresented at this year’s IRCAM Forum, 《Fragments 1-11》 offers fragments inspired by the stories from Genesis 1:1 and 11:9, exploring transformations across languages, media, and dimensions for the audience.",
        "content": "<h2><span><strong>《Fragments 1-11》</strong></span></h2>\r\n<p><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c79d3901b21fdf83f814638f3e691f45.png\" /></span></p>\r\n<p style=\"text-align: center;\"><span><span style=\"text-decoration: underline;\"><span>Fig 1. Multi-dimensional transformation between 2D and 3D</span></span></span></p>\r\n<p><span><strong>《Fragments 1-11》 is a cyclical journey, navigating through dimensions via diverse forms such as spoken language, images, video, and sound, all derived from an initial text to approach the essence through translation.</strong> This work seeks methods of genuine &ldquo;communication&rdquo;&mdash;approaching the original meaning&mdash;by engaging with today&rsquo;s multidimensional and multimodal mediums, where the residual elements across physical and non-physical dimensions emerge as evidence of the journey and interpretation.</span></p>\r\n<p><span>From the perspective that &ldquo;communication is translation,&rdquo; this work refrains from using traditional language or media, instead viewing programs, AI technology, and algorithms as modern tools of communication. For the artist, such tools represent means of interpretation that transcend human limitations, moving beyond any single mode of meaning to enable translation across the material and the immaterial. By navigating these interdimensional translations, the artist extracts objects that serve as cornerstones for new interpretations. 
The compression and expansion of data in the process are bolstered by technology, carefully preserving the integrity of the original material while intricately filling in the relationships and empty contexts woven between them.</span></p>\r\n<p><span>&nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/33903648798f459ac0ba5cdd2d044ed5.png\" /> </span></p>\r\n<p><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/604445039e66d1ba79d5d738a2d6eb01.png\" /></span></p>\r\n<p style=\"text-align: center;\"><span style=\"text-decoration: underline;\"><span>Fig 2. Multi-device communication between the main screen and audiences.</span></span></p>\r\n<p><span>《Fragments 1-11》 consists of various dimensional translations stemming from a single dataset, symbolizing a cyclical structure. The initial text is converted into an image via an AI generator; this image then undergoes programming to manifest as a three-dimensional object. The physical object is once again digitized and projected on-screen, representing the dataset. In this cyclical process, visitors encounter fragments that have broken off or been reduced in dimension. What lies before them is the residue of this cycle, a remnant of technology and a piece of something archeological. Through this, the audience recognizes the need for further translation to approach the cyclical structure&rsquo;s meaning and the significance of these remains.</span></p>\r\n<p><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e9f1cca2789b349ce2b0699275a922ba.png\" /> </span></p>\r\n<p><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0e0efab5537f88930b2277a422b74e25.png\" /></span></p>\r\n<p style=\"text-align: center;\"><span><span style=\"text-decoration: underline;\"><span>Fig 3. 
Multi-device communication between the main screen and audiences.</span></span></span></p>\r\n<p><span>As the saying &ldquo;translation is betrayal (traduttore traditore)&rdquo; suggests, translating one meaning into another medium can distort the original, and multiple interpretations of a text have led to misunderstanding and even misfortune. This work extends the limits and gaps created by translation beyond time into dimensional stages. The artist concentrates on the materials and data produced repeatedly beyond the limits of human labor and the potential that resides behind these processed outcomes. Rather than viewing translation as inferior to the original, the work acknowledges the potential to discover an intrinsic value previously undisclosed. AI and technology, as collaborators in this cyclical journey, become tools that help us approach essence and balance in a fragmented and dispersed era. Through iterative translation, the artist fills in the missing pieces between the compressed and expanded, selectively eliminating what has become excessive. This approach aims to provide clues to understand the world beyond oneself in finer detail and broaden one&rsquo;s scope of understanding.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>The cyclical processes and actions condensed within this work resemble the reality of our fast-paced, compressed world. Spaces and objects constructed in our real world exist both as complete entities and as fragments or cases of affirmation or negation. Even seemingly contrasting skyscrapers and period houses are pieces separated from an original source, inherently containing the essence, like &ldquo;mirrors emitting light of their own.&rdquo; Using multidimensional fragments as evidence, 《Fragments 1-11》 illustrates that technology has the potential to transcend the physical requirements and temporal constraints of reality, providing a means to reach the essence and break down boundaries across perspectives.</span></p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1945,
                "name": "generative ai",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 636,
                "name": "Generative music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 752,
                "name": "javascript",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2319,
                "name": "Multi-Device Web Artwork",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2352,
                "name": "multi-dimensional artwork",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 85723,
            "forum_user": {
                "id": 85621,
                "user": 85723,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/1105_stbuns-19005a_NoLogo_LowQuality.jpg",
                "avatar_url": "/media/cache/a9/9a/a99a6e5197aa4cc5ce799d7dcf3b261e.jpg",
                "biography": "Yun Park (b. 1989) is a multi-dimensional artist based in London and Seoul, who explores the intersection of the material and immaterial through innovative use of AI. By translating 2D and 3D images, as well as videos, into physical objects and then reconverting them, Yun Park investigates the boundaries between the digital and physical realms. This iterative process allows for a deep exploration of the relationship between the tangible and intangible, challenging conventional perceptions of reality. Yun Park holds an MA from the Royal College of Art. Prior to this, He earned a BA from Hongik University in Seoul. This diverse educational background equips Yun Park with a strong foundation in both traditional craftsmanship and contemporary digital practices.\nIn 2024, Yun Park participated in prominent exhibitions in Korea, including Arko Young Artist Day at Arko Theater, “Fragments of Babel” and APE Camp in Seoul, all supported by the Art Council Korea. He is invited as an artist for IRCAM Forum Seoul 2024. Their work has also gained international recognition, such as contributing artwork to the BBC’s King Charles III’s Coronation Concert at Windsor Castle in 2023.",
                "date_modified": "2024-10-26T08:23:57.949548+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "yunp",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "fragments-1-11-by-yun-park-jeanyoon-choi",
        "pk": 3080,
        "published": true,
        "publish_date": "2024-10-26T09:04:14+02:00"
    },
    {
        "title": "TWIST! by Jonathan Pitkin",
        "description": "A composition for virtual virtuoso vocalist, created using ISiS",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><em>TWIST!&nbsp;</em>is a compositional study which takes advantage of the unique capabilities of the ISiS vocal synthesizer, exploring extremes of musical speed, agility, precision, stamina and clarity of articulation that would otherwise be impossible. The &lsquo;libretto&rsquo; consists almost entirely of French tongue-twisters, and the only other sounds used are sampled footsteps which help to create the illusion of a singer that (/who) is virtually present in the performance space, moving around the audience in both realistic and rather more disruptive ways. The &lsquo;performer&rsquo; of TWIST! may appear to take on various characteristics at different times: tentative, exploratory, didactic, cheeky, ostentatiously virtuosic, obsessive and even hysterical.</p>\r\n<p><img alt=\"Screenshot of Max patching and text files\" src=\"https://forum.ircam.fr/media/uploads/user/d6a0240f5c09d65d4214a9019c036c6c.png\" /></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 26,
                "name": "Isis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 6368,
            "forum_user": {
                "id": 6365,
                "user": 6368,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/JP_side3_small_square.jpeg",
                "avatar_url": "/media/cache/2a/9c/2a9cc25f5314a68a5f9ca48830203669.jpg",
                "biography": "Jonathan Pitkin is a British composer whose music increasingly involves the use of new technology, whether in the production of sound or in the reconfiguration and expansion of familiar instruments, made to behave in unexpected ways which suggest that they may have minds of their own. He works around the edges of popular and classical, performance and installation, and liveness and automation.\n\nJonathan's work has featured at the Huddersfield, Spitalfields and New York City Electroacoustic Music Festivals, the IRCAM Forum Ateliers and the CIME General Assembly. His output includes works for Disklavier, Magnetic Resonator Piano, circular piano, and singing synthesizer, as well as installations, emulations, pedagogical software and composers' tools. His published writings include contributions to the proceedings of NIME and the ICMC, and edited volumes published by SAGE and Routledge.\n\nJonathan teaches Composition and Academic studies at the Royal College of Music, London.",
                "date_modified": "2026-02-03T13:23:21.199106+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "JonathanPITKIN",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "twist-by-jonathan-pitkin-1",
        "pk": 4287,
        "published": true,
        "publish_date": "2026-01-29T11:31:28+01:00"
    },
    {
        "title": "Tape Loop Workshop by Stegonaute",
        "description": "Tape Loops : Turning Cassettes into Instruments\r\n\r\n\r\nThis workshop explores tape loops as a hands-on, creative tool. Starting from commercial audio cassettes, we will physically dismantle and transform them into continuous tape loops. Using vintage 4-track cassette recorders, we will explore repetition, imperfection, and material sound as compositional forces, embracing chance, mechanical instability, and tactile interaction with analog media.",
        "content": "<h2></h2>\r\n<p>&nbsp;<strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p></p>\r\n<h2><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9e58f27c2313a7b5756dee817dcb07dd.jpg\" /></h2>\r\n<h2>&nbsp;</h2>\r\n<h2>&nbsp;</h2>\r\n<h2>Tape Loops as a Physical and Musical Practice</h2>\r\n<p>In this workshop, I focus on tape loop techniques using compact audio cassettes as both sound carriers and musical objects. Starting from commercial mass-produced tapes I deliberately embrace re-use, transformation, and deviation from the cassette&rsquo;s original function. The cassette is no longer a storage medium, but becomes an instrument somewhere between a very simple Mellotron and a random sequencer.</p>\r\n<p>By dismantling, cutting, and reassembling cassette tapes, participants create continuous tape loops. These loops introduce repetition, phase shifts, instability, and erosion as central musical parameters.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/08a781942a01e4fc3981397056c3c7b4.jpg\" /></p>\r\n<h2>&nbsp;</h2>\r\n<h2>&nbsp;</h2>\r\n<h2>Analog Constraints as Creative Forces</h2>\r\n<p>Using vintage 4-track cassette recorders (Tascam and Fostex), I explore how mechanical limitations such as wow and flutter, <strong>NOISE</strong>, dropouts, and saturation can be transformed into expressive qualities. These machines read cassette tapes in a single direction only, which allows recorded tape loops to be physically flipped and played backwards without any digital processing. 
This simple mechanical inversion opens up rich sonic possibilities, revealing reversed envelopes, altered attacks, and unfamiliar temporal structures that fundamentally reshape the perception of sound and gesture.</p>\r\n<p>In addition, I introduce a common \"analog\" modification consisting of placing a small piece of aluminum foil over the erase head. This effectively disables the erasure process, allowing continuous overdubbing on the tape and creating the famous \"sound on sound\" effect.</p>\r\n<p>The absence of digital synchronization and precise control opens a space where timing drifts, textures evolve unpredictably, and sound remains in constant motion.&nbsp;</p>\r\n<p>Rather than correcting imperfections, I invite participants to listen to them carefully and to compose with them.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/47da14ace24371ab589099d50c398ec8.jpg\" /></p>\r\n<p>&nbsp;</p>\r\n<h2>&nbsp;</h2>\r\n<h2>Tape Loops, Memory, and Ecology</h2>\r\n<p>Working with obsolete or discarded media also raises ecological and symbolic questions. By re-using old cassettes and tape machines, I challenge linear narratives of technological progress and propose a more circular approach to sound production. Tape loops embody memory in motion: sometimes I keep the printed material on the tape, and these recorded fragments are endlessly replayed, gradually transformed by time, friction, and material wear.</p>\r\n<p>This approach situates sound creation within a material, ecological, historical and sometimes personal perspective, where listening becomes an act of care toward fragile media.</p>\r\n<h2><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f1c6d8d2fe4bf0beac8fcb8ebb21ca27.jpg\" /></h2>\r\n<h2>&nbsp;</h2>\r\n<h2>&nbsp;</h2>\r\n<h2>Collective Exploration and Listening</h2>\r\n<p>The workshop alternates between hands-on construction, collective listening, and improvisation sessions. 
Participants share their loops, experiment with layering and live manipulation, and reflect on how repetition, decay, and duration shape musical form. I love discovering how participants bring new perspectives to me at each workshop.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/983b200d4dee22a1df1d9677e5bbc908.jpg\" /></p>\r\n<p>The goal is not technical mastery, but the development of a sensitive relationship to sound, time, and materiality, where composition emerges from touch, listening, attentive presence, and often from randomness.</p>",
        "topics": [
            {
                "id": 4127,
                "name": "cassette",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 143,
                "name": "Ecology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4128,
                "name": "lofi",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1804,
                "name": "loop",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2659,
                "name": "randomness",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4126,
                "name": "tape",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4129,
                "name": "transmission",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20204,
            "forum_user": {
                "id": 20196,
                "user": 20204,
                "first_name": "Stego",
                "last_name": "Naute",
                "avatar": "https://forum.ircam.fr/media/avatars/Photo_Stegonaute_01.jpg",
                "avatar_url": "/media/cache/a2/c0/a2c0038212d9d4eacaed7a6b2d2da0d1.jpg",
                "biography": "Ambient Composer",
                "date_modified": "2026-02-06T18:34:37.175281+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "stegorec",
            "first_name": "Stego",
            "last_name": "Naute",
            "bookmarks": []
        },
        "slug": "tape-loop-workshop-by-stegonaute",
        "pk": 4288,
        "published": true,
        "publish_date": "2026-01-29T22:31:02+01:00"
    },
    {
        "title": "sie sucht ihn für sex",
        "description": "https://siesuchtihnsex.net/",
        "content": "<p><a href=\"https://siesuchtihnsex.net/\"><span style=\"\">https://siesuchtihnsex.net/</span></a></p>\n<p><a href=\"https://x.com/siesuchtihnsex\"><span style=\"\">https://x.com/siesuchtihnsex</span></a></p>\n<p><a href=\"https://www.tumblr.com/siesuchtihnsex\"><span style=\"\">https://www.tumblr.com/siesuchtihnsex</span></a></p>\n<p><a href=\"https://www.twitch.tv/siesuchtihnsex1/about\"><span style=\"\">https://www.twitch.tv/siesuchtihnsex1/about</span></a></p>\n<p><a href=\"https://www.reddit.com/user/siesuchtihnsex/\"><span style=\"\">https://www.reddit.com/user/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://www.youtube.com/@siesuchtihnsex\"><span style=\"\">https://www.youtube.com/@siesuchtihnsex</span></a></p>\n<p><a href=\"https://gravatar.com/siesuchtihnsex\"><span style=\"\">https://gravatar.com/siesuchtihnsex</span></a></p>\n<p><a href=\"https://www.behance.net/siesuchtihnsex\"><span style=\"\">https://www.behance.net/siesuchtihnsex</span></a></p>\n<p><a href=\"https://photozou.jp/user/top/3447236\"><span style=\"\">https://photozou.jp/user/top/3447236</span></a></p>\n<p><a href=\"https://www.quora.com/profile/Siesuchtsex\"><span style=\"\">https://www.quora.com/profile/Siesuchtsex</span></a></p>\n<p><a href=\"https://taittsuu.com/users/siesuchtihnsex\"><span style=\"\">https://taittsuu.com/users/siesuchtihnsex</span></a></p>\n<p><a href=\"https://savee.com/siesuchtihnsex/\"><span style=\"\">https://savee.com/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://fileforums.com/member.php?u=297560\"><span style=\"\">https://fileforums.com/member.php?u=297560</span></a></p>\n<p><a href=\"https://app.readthedocs.org/profiles/siesuchtihnsex/\"><span style=\"\">https://app.readthedocs.org/profiles/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://pxhere.com/en/photographer/4966276\"><span style=\"\">https://pxhere.com/en/photographer/4966276</span></a></p>\n<p><a href=\"https://code.antopie.org/siesuchtihnsex\"><span 
style=\"\">https://code.antopie.org/siesuchtihnsex</span></a></p>\n<p><a href=\"https://gesoten.com/profile/detail/12689367\"><span style=\"\">https://gesoten.com/profile/detail/12689367</span></a></p>\n<p><a href=\"https://connect.gt/user/siesuchtihnsex\"><span style=\"\">https://connect.gt/user/siesuchtihnsex</span></a></p>\n<p><a href=\"https://www.jointcorners.com/siesuchtihnsex\"><span style=\"\">https://www.jointcorners.com/siesuchtihnsex</span></a></p>\n<p><a href=\"https://participation.bordeaux.fr/profiles/siesuchtihnsex/activity\"><span style=\"\">https://participation.bordeaux.fr/profiles/siesuchtihnsex/activity</span></a></p>\n<p><a href=\"https://participa.aytojaen.es/profiles/siesuchtihnsex/activity\"><span style=\"\">https://participa.aytojaen.es/profiles/siesuchtihnsex/activity</span></a></p>\n<p><a href=\"https://participer.valdemarne.fr/profiles/siesuchtihnsex/activity\"><span style=\"\">https://participer.valdemarne.fr/profiles/siesuchtihnsex/activity</span></a></p>\n<p><a href=\"https://entre-vos-mains.alsace.eu/profiles/siesuchtihnsex/activity\"><span style=\"\">https://entre-vos-mains.alsace.eu/profiles/siesuchtihnsex/activity</span></a></p>\n<p><a href=\"https://jobs.siliconflorist.com/employers/4089810-siesuchtihnsex\"><span style=\"\">https://jobs.siliconflorist.com/employers/4089810-siesuchtihnsex</span></a></p>\n<p><a href=\"https://letterboxd.com/siesuchtihnsex/\"><span style=\"\">https://letterboxd.com/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://routinehub.co/user/siesuchtihnsex\"><span style=\"\">https://routinehub.co/user/siesuchtihnsex</span></a></p>\n<p><a href=\"https://zimexapp.co.zw/siesuchtihnsex\"><span style=\"\">https://zimexapp.co.zw/siesuchtihnsex</span></a></p>\n<p><a href=\"https://cointr.ee/siesuchtihnsex\"><span style=\"\">https://cointr.ee/siesuchtihnsex</span></a></p>\n<p><a href=\"https://zrzutka.pl/profile/siesuchtihnsex-267093\"><span 
style=\"\">https://zrzutka.pl/profile/siesuchtihnsex-267093</span></a></p>\n<p><a href=\"https://civitai.com/user/siesuchtihnsex\"><span style=\"\">https://civitai.com/user/siesuchtihnsex</span></a></p>\n<p><a href=\"https://rotorbuilds.com/profile/209853/\"><span style=\"\">https://rotorbuilds.com/profile/209853/</span></a></p>\n<p><a href=\"https://pixelfed.uno/siesuchtihnsex\"><span style=\"\">https://pixelfed.uno/siesuchtihnsex</span></a></p>\n<p><a href=\"https://3dlancer.net/profile/u1141710\"><span style=\"\">https://3dlancer.net/profile/u1141710</span></a></p>\n<p><a href=\"https://findpenguins.com/siesuchtihnsex\"><span style=\"\">https://findpenguins.com/siesuchtihnsex</span></a></p>\n<p><a href=\"https://naijamatta.com/siesuchtihnsex\"><span style=\"\">https://naijamatta.com/siesuchtihnsex</span></a></p>\n<p><a href=\"https://www.elephantjournal.com/profile/siesuchtihnsex/\"><span style=\"\">https://www.elephantjournal.com/profile/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://www.beamng.com/members/siesuchtsex.783638/\"><span style=\"\">https://www.beamng.com/members/siesuchtsex.783638/</span></a></p>\n<p><a href=\"https://medibang.com/author/28082390/\"><span style=\"\">https://medibang.com/author/28082390/</span></a></p>\n<p><a href=\"https://audio.com/siesuchtihnsex\"><span style=\"\">https://audio.com/siesuchtihnsex</span></a></p>\n<p><a href=\"https://forums.maxperformanceinc.com/forums/member.php?u=243976\"><span style=\"\">https://forums.maxperformanceinc.com/forums/member.php?u=243976</span></a></p>\n<p><a href=\"https://forum.aigato.vn/user/siesuchtihnsex\"><span style=\"\">https://forum.aigato.vn/user/siesuchtihnsex</span></a></p>\n<p><a href=\"http://www.genina.com/user/editDone/5254797.page\"><span style=\"\">http://www.genina.com/user/editDone/5254797.page</span></a></p>\n<p><a href=\"https://malt-orden.info/userinfo.php?uid=453879\"><span style=\"\">https://malt-orden.info/userinfo.php?uid=453879</span></a></p>\n<p><a 
href=\"https://www.iglinks.io/SuciaAturo737-5by?preview=true\"><span style=\"\">https://www.iglinks.io/SuciaAturo737-5by?preview=true</span></a></p>\n<p><a href=\"https://heylink.me/suciaaturo737/\"><span style=\"\">https://heylink.me/suciaaturo737/</span></a></p>\n<p><a href=\"https://www.hostboard.com/forums/members/siesuchtihnsex.html\"><span style=\"\">https://www.hostboard.com/forums/members/siesuchtihnsex.html</span></a></p>\n<p><a href=\"https://infiniteabundance.mn.co/members/39096432\"><span style=\"\">https://infiniteabundance.mn.co/members/39096432</span></a></p>\n<p><a href=\"https://cinderella.pro/user/270023/siesuchtihnsex/\"><span style=\"\">https://cinderella.pro/user/270023/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://www.deafvideo.tv/vlogger/siesuchtihnse?o=mv\"><span style=\"\">https://www.deafvideo.tv/vlogger/siesuchtihnse?o=mv</span></a></p>\n<p><a href=\"https://cornucopia.se/author/siesuchtihnsex/\"><span style=\"\">https://cornucopia.se/author/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://www.chaloke.com/forums/users/siesuchtihnsex/\"><span style=\"\">https://www.chaloke.com/forums/users/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://armchairjournal.com/forums/users/siesuchtihnsex/\"><span style=\"\">https://armchairjournal.com/forums/users/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://gamblingtherapy.org/forum/users/siesuchtihnsex/\"><span style=\"\">https://gamblingtherapy.org/forum/users/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://amaz0ns.com/forums/users/siesuchtihnsex/\"><span style=\"\">https://amaz0ns.com/forums/users/siesuchtihnsex/</span></a></p>\n<p><a href=\"https://aprenderfotografia.online/usuarios/siesuchtihnsex/profile/\"><span style=\"\">https://aprenderfotografia.online/usuarios/siesuchtihnsex/profile/</span></a></p>\n<p><a href=\"https://myanimeshelf.com/profile/siesuchtihnsex\"><span style=\"\">https://myanimeshelf.com/profile/siesuchtihnsex</span></a></p>\n<p><a 
href=\"https://www.mindomo.com/outline/siesuchtihnsex-copy-a7d508d248484094a404cff70b19e669\"><span style=\"\">https://www.mindomo.com/outline/siesuchtihnsex-copy-a7d508d248484094a404cff70b19e669</span></a></p>\n<p><a href=\"https://3dtoday.ru/blogs/siesuchtihnsex\"><span style=\"\">https://3dtoday.ru/blogs/siesuchtihnsex</span></a></p>\n<p><a href=\"https://viblo.asia/p/siesuchtihnsex-37LdeQMMVov\"><span style=\"\">https://viblo.asia/p/siesuchtihnsex-37LdeQMMVov</span></a></p>\n<p><a href=\"https://gitlab.haskell.org/siesuchtihnsex\"><span style=\"\">https://gitlab.haskell.org/siesuchtihnsex</span></a></p>\n<p><a href=\"https://www.zubersoft.com/mobilesheets/forum/user-130588.html\"><span style=\"\">https://www.zubersoft.com/mobilesheets/forum/user-130588.html</span></a></p>\n<p><a href=\"https://www.fitday.com/fitness/forums/members/siesuchtihnsex.html\"><span style=\"\">https://www.fitday.com/fitness/forums/members/siesuchtihnsex.html</span></a></p>\n<p><a href=\"https://undrtone.com/siesuchtihnsex\"><span style=\"\">https://undrtone.com/siesuchtihnsex</span></a></p>\n<p><a href=\"https://www.niftygateway.com/@siesuchtihnsex/\"><span style=\"\">https://www.niftygateway.com/@siesuchtihnsex/</span></a></p>\n<p><a href=\"https://disqus.com/by/siesuchtsex/about/\"><span style=\"\">https://disqus.com/by/siesuchtsex/about/</span></a></p>\n<p><a href=\"https://www.bunity.com/-f2e4b668-10a1-4a9-aae0-siesuchtihnsex\"><span style=\"\">https://www.bunity.com/-f2e4b668-10a1-4a9-aae0-siesuchtihnsex</span></a></p>\n<p><a href=\"https://www.livejournal.com/profile/?userid=102740241&amp;t=I\"><span style=\"\">https://www.livejournal.com/profile/?userid=102740241&amp;t=I</span></a></p>\n<p><a href=\"https://www.fundable.com/siesuch-tsex\"><span style=\"\">https://www.fundable.com/siesuch-tsex</span></a></p>\n<p><a href=\"https://bbs.mofang.com.tw/home.php?mod=space&amp;uid=2439413\"><span 
style=\"\"></span></a></p>",
        "topics": [],
        "user": {
            "pk": 166429,
            "forum_user": {
                "id": 166192,
                "user": 166429,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/3115faea436f45bd8cd8b31ef199cc6b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-02T14:51:30.251990+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "siesuchtihnsex",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sie-sucht-ihn-fur-sex",
        "pk": 4582,
        "published": false,
        "publish_date": "2026-04-02T14:56:52.414785+02:00"
    },
    {
        "title": "Freyja - Sergei Leonov",
        "description": "An audiovisual performance based on the sonification of biological data",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Presented by: Sergei Leonov<br /><a href=\"https://forum.ircam.fr/profile/sergei-l/\">Biography</a></p>\r\n<p style=\"text-align: justify;\">In this show, the author strives to take a fresh look at sacred music. With the help of the latest technologies, he allows the unknown part of life to address the audience through sound. The performer is a conductor who refers to the plant and, through it, to metaphysics, using a variety of sonic and emotional methods, thereby establishing a link between the two parties. It speaks of desperate prayer and scientific experiment, of transcendent states and programming code, of electricity as the primordial energy of the universe and as a resource for machines.</p>\r\n<p style=\"text-align: justify;\">Most of the sounds are produced by plants. Electrodes attached to them record their biosignals, and this information is then used to synthesize sounds, generate CVs that modify the parameters of a modular synthesizer during the performance, drive sound effects, and so on. Using a variety of sounds, the artist tries to influence their biological rhythms and elicit a reaction. In this way, technology allows the unknown parts of life to address an audience through sound. The title of the work refers to Freyja, the Scandinavian goddess of nature.</p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 128,
                "name": "Audiovisual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1776,
                "name": "biodata sonification",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 565,
                "name": "Biofeedback",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 297,
                "name": "Electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 127,
                "name": "Video",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20807,
            "forum_user": {
                "id": 20796,
                "user": 20807,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/s_2136789581-gs_7-is_30-u_0-istr_0.8-oi_1-m_luna-diffusion-Female_robot_face.jpeg",
                "avatar_url": "/media/cache/9c/9c/9c9c88bcc9e6594ecd70ddf0b7622f7c.jpg",
                "biography": "Born in 1994, graduated from the College of Music and the Superior State Conservatory of Saint Petersburg, then from the Geneva University of Music in the composition class of Luis Naon, Gilbert Nouno and Michael Jarrell. Since childhood, he has participated in the musical life of Saint Petersburg as a chorister and pianist. During his studies, he was a member of numerous ensembles performing contemporary and baroque music at concerts and festivals in Russia and Europe. As a performer, he participates in a wide variety of cultural events, from theatrical productions to the openings of visual art exhibitions. Since 2019, Sergei has participated in Geneva's cultural life, focusing mainly on electronic and mixed music. He composes music for instrumental and vocal ensembles with electronics, and creates sound installations and live sets in the field of science-art. In his work, he combines various techniques of repetitive music with the exploration of the spectrum of sound, often blending folk and scientific themes, and experiments with combining pop synthesizer sounds and classical instrumental timbres.",
                "date_modified": "2025-07-21T15:33:13.818113+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sergei-l",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "freyja",
        "pk": 2729,
        "published": true,
        "publish_date": "2024-02-14T16:16:11+01:00"
    },
    {
        "title": "Acoustics for musicians: from concert halls to virtual realities by Benoit Alary",
        "description": "With the emergence of immersive technologies, the acoustics we encounter daily in the real world coexist with artificial ones. From large multichannel systems for live music to navigating a scene in augmented reality, the technologies we use to reproduce the aesthetic experience of an acoustic space are quickly evolving to adapt to new realities. In his presentation, Benoit Alary (researcher, IRCAM) will review some of the technologies being developed at IRCAM to create immersive reverberation.",
        "content": "<div class=\"WordSection1\">\r\n<div>With the emergence of immersive technologies, the acoustics we encounter daily in the real world coexist with artificial ones. From large multichannel systems for live music to navigating a scene in augmented reality, the technologies we use to reproduce the aesthetic experience of an acoustic space are quickly evolving to adapt to new realities. In his presentation, Benoit Alary (researcher, IRCAM) will review some of the technologies being developed at IRCAM to create immersive reverberation.&nbsp;<span lang=\"EN-US\">We will discuss key aspects of room acoustics, how we perceive them, and how a space can be reproduced either realistically or creatively. </span>With these tools, we want to explore fresh paradigms for creating complex spatial reverberation that can evolve with the creative language emerging from designing new immersive experiences.</div>\r\n<div>\r\n<div><img src=\"/media/uploads/espro_1-0x520.jpg\" alt=\"\" width=\"780\" height=\"520\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></div>\r\n<div>\r\n<h6 style=\"text-align: center;\"><strong>L'espace de projection &copy; &Eacute;ric Laforge</strong></h6>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 95,
                "name": "Acoustics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 403,
                "name": "Reverberation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 45,
                "name": "Spat5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 24564,
            "forum_user": {
                "id": 24537,
                "user": 24564,
                "first_name": "Benoit",
                "last_name": "Alary",
                "avatar": "https://forum.ircam.fr/media/avatars/BA_2021_06.jpg",
                "avatar_url": "/media/cache/27/b3/27b31b6ef7aaf23499bed29603125e56.jpg",
                "biography": "Benoit Alary is a researcher in the Acoustic and Cognitive Spaces team of the STMS lab, part of IRCAM. He has over fifteen years of experience in immersive audio, shared between industry and academia, including a Ph.D. in acoustics and signal processing from Aalto University (Finland) and an MSc from the University of Edinburgh. His research centers around sound reproduction, analysis/synthesis, and perception. His current projects involve artificial reverberation, 6DoF sound reproduction, machine learning, and virtual acoustics.",
                "date_modified": "2025-11-07T10:18:43.509252+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 317,
                        "forum_user": 24537,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-07",
                        "type": 0,
                        "keys": [
                            {
                                "id": 566,
                                "membership": 317
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "balary",
            "first_name": "Benoit",
            "last_name": "Alary",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3070,
                    "user": 24564,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "benoit-alary",
        "pk": 3058,
        "published": true,
        "publish_date": "2024-10-23T10:59:19+02:00"
    },
    {
        "title": "Celestial Armillary and Ubiquitous Wave",
        "description": "Celestial Armillary and Ubiquitous Wave is a multimedia experience combining Spatial Audio and Virtual Reality experience that explores the cognition of sound and cosmology.",
        "content": "<p><strong><em>Celestial Armillary and Ubiquitous Wave </em></strong><span style=\"\">is a multimedia two-perspective (on-site/ virtual) experience of the same theme that explores the cognition of sound and the cosmos in a multisensory context.</span></p>\n<p>&nbsp;</p>\n<p><span style=\"\">This project is inspired by modern and ancient Chinese observational cosmology,&nbsp; we hope to translate the model through sound and visual language to create a new version&mdash; a passage that can link the past and now. In the 4th century B.C., Chinese ancients began to use the armillary sphere to measure and interpret celestial objects. It was used to construct perceptions of the external world. In this age of modern technology, astronomical data measurement and sonification are also iterating to explore the human-universe relationship. A new awareness of the universe is provoked by utilizing Higher-Order Ambisonics(HOA) sound experience and Virtual Reality experience. These two experiences perform in parallel and create a mirror heterotopia.</span></p>\n<p>&nbsp;</p>\n<p><span style=\"\"><img alt=\"\" src=\"/media/uploads/user/4e34f3646a5dec0e54c1583520410748.jpg\"></span></p>\n<p>&nbsp;</p>\n<p><span style=\"\">The first spatial sound experience transports the audience to the center of a giant armillary in space. At the same time, the moving image creates a &ldquo;remeasurement&rdquo; of the Asian Astronomical map and responds to the experimental music. The rotation of the giant armillary sphere accompanies with various Chinese instruments, such as the Zither, Flute, Xun, Drums, etc. 
Starting from the Sun, it will take the audience on a slow astronomical sound-wave journey through the nine planets.</span></p>\n<p><span style=\"\"><img alt=\"\" src=\"/media/uploads/user/63bd20f53360b3fd8b3585bd82e0f385.png\"></span></p>\n<p><span style=\"\">In our second virtual reality experience, the ambisonic sound and interactive experience immerse the audience in a world of space measurement. With 6DoF, each step the audience takes changes the armillary sphere and its distance from the planets. The audience is encouraged to use their bodies in the virtual space to measure the space-time transformation of the universe. Through the perception of spatial changes in the armillary sphere and the planets, this experience amplifies the sense of embodiment.</span></p>\n<p>&nbsp;</p>\n<p><span style=\"\">Acting as a key, the multi-sensory experience will open a threshold onto a cosmic archaeological experience of the universe for the audience.</span></p>",
        "topics": [
            {
                "id": 620,
                "name": "Spatialaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 301,
                "name": "Virtual reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27090,
            "forum_user": {
                "id": 27063,
                "user": 27090,
                "first_name": "Cainy",
                "last_name": "Yiru Yan",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_0761_%E5%89%AF%E6%9C%AC.JPG",
                "avatar_url": "/media/cache/63/8f/638f3b80b67e2eeb3a0e7ab0f789aaa6.jpg",
                "biography": "Cainy Yiru Yan is a London-based interdisciplinary artist and immersive experience designer. Her practice spans extended reality (XR), audiovisual installations, sculptural practices, spatial sound, photography, film, documentary, digital art, live performances, and art prints. She explores overlooked narratives through post-existentialist thought, holistic systems, and Daoist philosophy, creating environments that dissolve the boundaries between materiality, spirituality, temporality, and human experience. Grounded in these philosophical foundations, her work investigates the fluid and interdependent relationships between space, material, memory, and human perception. Rather than imposing narratives, she invites audiences to encounter environments where decay and renewal, stillness and transformation, coexist. Through immersive technologies, spatial atmospheres, and multi-sensory experiences, Cainy crafts poetic spaces that invite audiences to engage with the invisible layers of memory, nature, and transformation. Her work has been exhibited internationally at venues such as IRCAM at the Centre Pompidou (FR), Kühlhaus Berlin (DE), the Royal Birmingham Society of Artists (UK), Flor",
                "date_modified": "2025-05-03T18:20:29.531747+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "yanyiru",
            "first_name": "Cainy",
            "last_name": "Yiru Yan",
            "bookmarks": []
        },
        "slug": "celestial-armillary-and-ubiquitous-wave-2",
        "pk": 2041,
        "published": false,
        "publish_date": "2023-02-06T19:23:21.595203+01:00"
    },
    {
        "title": "Apollo e Marsia by Jonathan Impett (Orpheus Institute)",
        "description": "Apollo e Marsia is a hybrid installation investigating the dynamic and creative nature of performance memory. The musicians (viola d’amore and alto flute) have just finished a mortal performance contest (a story from Ovid). They are heard from two opposing screens, their evolving memories of what they have just played informed by each other, processed through each other’s mode of acoustic production (strings and tubes) and two mutually listening AIs. The individual past produces emergent artefacts in the common present.",
        "content": "<p><strong>➡️ This presentation is part of&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><em>Apollo e Marsia</em> addresses the fundamentally enacted, imaginative and contextual nature of musical memory, its constant reorganising of time, its internal loops and leaps of focus.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"Tintoretto: la gara tra Apollo e Marsia (c.1545)\" src=\"https://forum.ircam.fr/media/uploads/user/f32171236e91d7d18873afac4e3c3709.jpg\" /></p>\r\n<p>The image derives from a Tintoretto painting depicting Apollo and Marsyas in a contest to see which is the greater musician, as recounted by Ovid. We see the moment after they have finished playing but before the judgement; one will die. In this instant, both are remembering, reconstructing, reimagining their own performance and that of the other, their impressions and memories modulating each other into new formations. This installation thus deals with the generative, nonlinear memory of performance in compressed time and under stress.</p>\r\n<p>&nbsp;</p>\r\n<p>In its complete form, <em>Apollo e Marsia </em>is a hybrid installation consisting of two 85-inch screens, positioned opposite each other. One screen displays a filmed performance by a musician playing a viola d&rsquo;amore, the other playing an alto flute, performances of roughly 23 and 25 minutes respectively. Each screen is surrounded by a pair of loudspeakers mounted at ear level, a microphone above the screen and a physical instrument. The speakers present the stereo live sound untreated. 
Two further channels of processed instrumental sound are played through large physical instruments: that of the viola d&rsquo;amore through a vertical pair of long transparent acrylic tubes, that of the alto flute through a horizontal instrument, likewise transparent, with two 3-meter strings.</p>\r\n<p><img alt=\"Apollo e Marsia - EPFL Pavilions, Lausanne, 2024-5\" src=\"https://forum.ircam.fr/media/uploads/user/ad4748694d87005535dac4eed1fc5225.jpg\" /></p>\r\n<p>The musical material derives from two Delphic hymns to Apollo, subject to multiple layers of mediation. Wave models drive the generation of emergent material through iterative and nonlinear processes operating in both symbolic (Open Music) and sonic (Max) domains. Each performer hears themselves through the means of sound production of the other - the large physical instruments here simulated by filters. The memories of both performers are in constant evolution as AIs on each side listen to the whole in situ, responding through the physical instruments on the basis of impressions accumulated through each 24-hour period.</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 428,
            "forum_user": {
                "id": 428,
                "user": 428,
                "first_name": "Jonathan",
                "last_name": "Impett",
                "avatar": "https://forum.ircam.fr/media/avatars/Jonathan_Impett_c_Gilles_Anquez_copy.jpg",
                "avatar_url": "/media/cache/25/07/25070349eff80aa4f0b6b8495c59bab3.jpg",
                "biography": "Jonathan Impett is Director of Research at The Orpheus Institute, Ghent, where he leads the research group “Music, Thought and Technology”. A composer, trumpet player and writer, his work is concerned with the discourses and practices of contemporary musical creativity, particularly the nature of the technologically-situated musical artefact. His early (1992) development of the ‘metatrumpet’ constituted one of the first projects to explore instrument, performer, interaction technologies and composer as a single creative space, necessitating new conceptual models. \n\nAs a performer he is active as an improviser, as a soloist and in historical performance. His recent monograph on the music of Luigi Nono is the first comprehensive study of the composer’s work. Jonathan is currently working on a project considering the nature of the contemporary musical object, ‘the work without content’. \n\nActivity in the space between composition and improvisation has led to continuous research in the areas of interactive systems, interfaces and modes of collaborative performance. Recent works combine installation, live electronics and computational models with notated and improvised performance.",
                "date_modified": "2026-03-04T15:16:14.462034+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 669,
                        "forum_user": 428,
                        "date_start": "2026-01-08",
                        "date_end": "2027-01-08",
                        "type": 0,
                        "keys": [
                            {
                                "id": 1161,
                                "membership": 669
                            },
                            {
                                "id": 1160,
                                "membership": 669
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "jimpett",
            "first_name": "Jonathan",
            "last_name": "Impett",
            "bookmarks": []
        },
        "slug": "apollo-e-marsia",
        "pk": 4424,
        "published": false,
        "publish_date": "2026-02-24T16:48:31+01:00"
    },
    {
        "title": "Introducing the Mode/Scale sofware by Eric Montalbetti & Serge Lemouton",
        "description": "A computer assisted composition tool to study all the characteristics and possibilities of a given system of modes and scales, and to analyse any musical fragment accordingly.",
        "content": "<p><strong>Introducing the Mode/Scale software developed by Serge Lemouton (programmer) &amp; Eric Montalbetti (composer) in Max/Bach</strong></p>\r\n<p><em>As many other composers of my generation (let&rsquo;s say the composers born from the late 50s to early 70s &ndash; I was born in 1968), I have tried to conciliate the double heritage of serial and modal music&nbsp;: serial as meant in the music of Arnold Sch&ouml;nberg as well as developed by all the so-called Darmstadt composers, and modal as inherited from Faur&eacute;, Debussy and, above all, Olivier Messiaen.</em></p>\r\n<p><em>In the previous generation, Luciano Berio and Witold Lutoslawski had already reintroduced harmonic thinking in the serial composition process, through the concepts of polarity (Berio) and notes reservoirs (Lutoslawski) which are very important to me.</em></p>\r\n<p><em>But I also realized that there is a kind of immanent logic in the organization of the 12 tones in the study of all the possible iterative scales of level 1 or 2, that is the scales which can be developed on the basis of a simple pattern made of a succession of tones and semi-tones, excluding the repetition of more than 2 same intervals. 
</em></p>\r\n<p><em>I call them &laquo;&nbsp;mode/scales&nbsp;&raquo; (&laquo;&nbsp;mod&eacute;chelles&nbsp;&raquo; in French) because some are octaving modes (reproducing exactly the same notes from one octave to another), and others are non-octaving scales, meaning that the sequence of notes can differ from one octave to another, which of course depends on whether the basic pattern of the mode/scale is built within a subdivision of the octave or not.</em></p>\r\n<p><em>I have numbered the mode/scales after the interval in which their basic pattern is defined (from 0 for chromatism to 11 for the octave), adding a letter when there are different possible patterns within the same interval.</em></p>\r\n<p><em>There are precisely only 8 octaving modes and 10 non-octaving scales following this rule, which I call mode/scales (mod&eacute;chelles), so this is a closed and coherent harmonic system.</em></p>\r\n<p><em>Most of them are of limited transpositions, and each name also gives the number of its limited transpositions (from 0 for chromatism and 1 for the whole-tone scale to 10 transpositions for the 10a, 10b and 10c scales, and of course 11 transpositions for the modes 11a, 11b and 11c composed within the octave).</em></p>\r\n<p><em><img alt=\"Eric Montalbetti list of iterative modes &amp; scales\" src=\"https://forum.ircam.fr/media/uploads/user/10e62a2d4c881f41d3b7e7151e83ad0e.png\" /></em></p>\r\n<p><em>Please note that I refused to systematically reduce the definition of the mode/scales to the octave (unlike Messiaen, for instance, who did not escape the influence of tonal music in this respect) because I do not see a proper reason to follow this tradition since we are composing with the 12 tones. 
On the contrary, we can find in the structure of each mode/scale a rational justification for organizing the 12 tones across the full instrumental range, with octave doubling either possible or excluded.</em></p>\r\n<p><em>Besides, the mode/scales being iterative scales, the notion of a fundamental note is not always relevant and very much depends on the context, but all the possible permutations of the basic pattern can be seen as one of the limited transpositions of the same mode/scale, which simplifies our analysis.</em></p>\r\n<p><em>Starting from this point of view, our software makes it possible to further explore all the mode/scales, to compare them, to note their characteristics, to deduce their greater or lesser relationships &ndash; that is to say, rules of modulation or possible sequences with each other, or on the contrary oppositions &ndash; and to develop their potential to compose chords from more or less complex arpeggios.</em></p>\r\n<p><em>We can in fact develop from the mode/scales simple arpeggios (every 2 notes, every 3 notes, etc.) as well as derived arpeggios, respecting the same proportions as the basic pattern specific to each mode/scale, following a chosen number of notes (for example, every 2 and every 4 notes applied to mode 3 (uu-) gives the 1st, 3rd, 5th and 9th notes, etc.)</em></p>\r\n<p><em>We can even play with the various permutations of each mode/scale in the construction of its derived arpeggios (which can be especially relevant depending on which note or degree of the basic pattern you start from).</em></p>\r\n<p><em>And it is possible to derive a derived arpeggio a second time. </em></p>\r\n<p><em><img alt=\"Modes_&eacute;chelles Tab\" src=\"https://forum.ircam.fr/media/uploads/user/98a0bfb5589843ddab35fd943df63105.png\" /></em></p>\r\n<p><em>It is obviously interesting to be able to check the order of appearance of the 12 sounds as you go through these mode/scales or their various arpeggios. 
You can choose from which note to start the evaluation and the software will automatically select and color the notes in their order of appearance.</em></p>\r\n<p><em><img alt=\"Total_chromatique_ad Tab\" src=\"https://forum.ircam.fr/media/uploads/user/9401b1c300245d1fe873ae3c4ebd599b.png\" /></em></p>\r\n<p><em>It is also interesting to compare two different results and check how many notes they have in common, whether between 2 mode/scales or between 2 arpeggios.</em></p>\r\n<p><em><img alt=\"Onglet ap intersection ad\" src=\"https://forum.ircam.fr/media/uploads/user/37192696ffbf2604feadc395c3376daa.png\" /></em></p>\r\n<p><em>It is also interesting to identify the characteristic symmetries of certain derived arpeggios, which the software will show graphically.</em></p>\r\n<p><em><img alt=\"Onglet Sym&eacute;tries\" src=\"https://forum.ircam.fr/media/uploads/user/cf9011d0d22fbfac3a55d490c292f96e.png\" /></em></p>\r\n<p><em>Finally, we may want to reorder the results in one way or another (rectus, retrograde, from the center to the ends or from the ends to the center, or even distributed randomly until satisfaction, etc.).</em></p>\r\n<p><em><img alt=\"Onglet Ordonnancement\" src=\"https://forum.ircam.fr/media/uploads/user/7156cfd0d61a98e33a9763854212da47.png\" /></em></p>\r\n<p><em>With the &ldquo;maquette&rdquo; tab, you can also copy/paste several successive results and then export them together into a score editor.</em></p>\r\n<p><em>And we can also explore the interest of superimposing different modes on several voices (polymodality).</em></p>\r\n<p><em><img alt=\"Onglet Maquette\" src=\"https://forum.ircam.fr/media/uploads/user/ee351cbea0853eed294dacfd90205b66.png\" /></em></p>\r\n<p><em>Please also note that it is always possible to send the result from one tab to another to study something from several perspectives. 
</em></p>\r\n<p><em>Conversely, the software makes it possible to analyze a given musical fragment (defined as precise pitches or as a series of intervals in absolute terms) to know to which mode/scale or even to which simple arpeggio or derived arpeggio of a mode/scale it belongs.</em></p>\r\n<p><em><img alt=\"Onglet ME search\" src=\"https://forum.ircam.fr/media/uploads/user/f76f74909dabbcf4d427a6da04f23659.png\" /></em></p>\r\n<p><em>This software will remain in development during the composition of an important cycle of piano pieces to be first performed in June 2026, but </em><em>a first version will hopefully be made available on the Ircam Forum after the coming summer.</em></p>\r\n<p><em>In the future, we plan to allow other composers to use this software by entering their own harmonic system, that is to say to compose their own library of mode/scales specific to their own style.</em></p>\r\n<p><em>For instance, we may wish to open the system to iterative modes of factor 3, as well as to more &ldquo;classic&rdquo; designed modes (Major, minor, Dorian, Andalusian, etc.).</em></p>\r\n<p><em>It is even already possible to study iterative scales using micro-intervals.</em></p>\r\n<p><em>Hoping you will enjoy this new approach to harmonic thinking.</em></p>\r\n<p><a href=\"http://www.ericmontalbetti.com/\">www.ericmontalbetti.com</a></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 379,
                "name": "Analysis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 954,
                "name": "CAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 362,
                "name": "Harmony",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 474,
                "name": "Modes",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 363,
                "name": "Scales",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4499,
            "forum_user": {
                "id": 4496,
                "user": 4499,
                "first_name": "Eric",
                "last_name": "Montalbetti",
                "avatar": "https://forum.ircam.fr/media/avatars/2022_04_28_-_%C3%89ric_Montalbetti_-_Portraits_EIC_-_ALauriol-16.jpeg",
                "avatar_url": "/media/cache/a9/09/a909af8295d6d4deacb1449db3442ce9.jpg",
                "biography": "Since 2015, Eric Montalbetti has the joy of hearing his scores coming to life thanks to wonderful performers such as violinists Christian Tetzlaff, Tedi Papavrami, David Grimal, cellists Marc Coppey, Henri Demarquette, Alban Gerhardt, Truls Mørk, Tanja Tetzlaff, winds Emmanuel Pahud, Viola Wilmsen, Pierre Génisson, pianists Momo Kodama or François-Frédéric Guy, as well as conductors Pierre Bleuse, Lionel Bringuier, Renaud Capuçon, Mikko Franck, Yasuaki Itakura, Jonathan Nott, Pascal Rophé, François-Xavier Roth, Nikolaj Szeps-Znaider, Pierre-André Valade or Kazuki Yamada.\nOver 25 different scores have already been scheduled in the Berlin Boulez Saal, Spannungen festival or Kölner Philharmonie in Germany, Wiener Konzerthaus in Austria, Amsterdam Muziekgebouw in the Netherlands, Tonhalle in Zürich, Lugano LAC, Geneva Victoria Hall and Lausanne in Switzerland, Prussia Cove IMS in Great-Britain, Enescu festival in Bucharest, or in Belgium, Denmark, Italy, Monte Carlo, Spain, Slovenia, Lebanon, Korea and several times in Japan - as well as in France from Philharmonie de Paris to Lyon, Toulouse or Montpellier.\nMusic published by Durand and Allegretto / 2 CD albums on Alpha Classics",
                "date_modified": "2025-08-19T10:40:36.000816+02:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 763,
                        "forum_user": 4496,
                        "date_start": "2014-11-16",
                        "date_end": "2026-06-30",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "ericmontalbetti",
            "first_name": "Eric",
            "last_name": "Montalbetti",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 114,
                    "user": 4499,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 4499,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 78,
                    "user": 4499,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 33,
                    "user": 4499,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 214,
                    "user": 4499,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 387,
                    "user": 4499,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 39,
                    "user": 4499,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2766,
                    "user": 4499,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2718,
                    "user": 4499,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "introducing-the-modescale-sofware-by-eric-montalbetti-serge-lemouton",
        "pk": 2854,
        "published": true,
        "publish_date": "2024-03-26T10:45:04+01:00"
    },
    {
        "title": "Niches Acoustiques: urban soundscape design, or, composing (with) the sonic landscape of a public square in Paris. (Niches Acoustiques I)",
        "description": "NYU Ircam Forum 2022 contribution by Nadine Schütz.\r\n\r\nAssociated Articles: Inform and evaluate a public space sound installation through perceptual evaluations, an art x science collaboration. (Niches Acoustiques II)",
        "content": "<p><img alt=\"Niches Acoustiques: a perennial sound installation dedicated to the forecourt of the new courthouse (Tribunal Judiciaire) in Paris.\" src=\"/media/uploads/user/847bb9b2869d0dc5c32186dd10f1fa08.jpg\" /></p>\r\n<p>The perennial sound installation entitled \"Niches Acoustiques\", a winning project of the Participatory Budget of the City of Paris, is dedicated to an urban public space: the square of the new courthouse (Tribunal Judiciaire) of Paris, located in the north-western part of the capital. This square represents a complex challenge: serving a landmark building while at the same time being part of the development of a new urban neighbourhood. The installation, a work on spatial listening and perceptual sound masking which is currently being composed, contributes to shaping the identity of this new public space. It creates an appeasing and varied auditory foreground that sends annoying monotonous noises into the background while opening up the courthouse square to an urban narrative that connects the territories rather than separating them. This installation is a winning project of the Participatory Budget of the City of Paris. During this presentation, I will introduce the development/context and the detailed artistic proposal of \"Niches Acoustiques\" and address specific challenges related to the creative process of what might be called non-intrusive urban (land) sound design. I will situate this work within my artistic research at IRCAM-STMS, which investigates methodological questions of composing in(to) existing sonic environments. 
This investigation also led to the scientific and artistic collaboration applied to the \"Niches Acoustiques\" project, currently implemented within the Perception and Sound Design team in the framework of Valerian Fraisse's thesis on informing and evaluating public space sound installations.</p>\r\n<p><img alt=\"HOA site recordings and acoustic measurements with the IRCAM Perception and Sound Design Team and doctoral candidate Val&eacute;rian Fraisse.\" src=\"/media/uploads/user/165b929fd771c9f72d504fe39a797c23.jpg\" /></p>",
        "topics": [
            {
                "id": 919,
                "name": "art research collaboration",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 920,
                "name": "landscape",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 915,
                "name": "NYU",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 524,
                "name": "Design et traitement sonores",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 815,
                "name": "soundscape",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 918,
                "name": "urban",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17607,
            "forum_user": {
                "id": 17604,
                "user": 17607,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Sonic_Topologies_1257_b_cutsquare_smallsmall.jpg",
                "avatar_url": "/media/cache/b4/99/b499fa45336c40f5a3857c39a793e3a0.jpg",
                "biography": "Nadine Schütz is a sound artist, architect and composer from Switzerland, based in Paris. She explores the auditory landscape like an environmental interpreter and composes by developing the acoustic qualities and ambiences of a site. Space and place become thus a creative score that informs and directs its own transformation. Her compositions, performances and scenographic sound work have been presented in Zurich, Paris, London, Venice, Naples, New York, Moscow, Tokyo and Kyoto. Within urban development projects, her interventions combine the artistic reading of a site with the concern for augmenting its acoustic comfort and identity. Through an original combination of techniques derived from bio- and psychoacoustics, music, sculpture and landscape architecture, she creates sound installations and acoustic designs that participate tangibly in users' daily experiences. Nadine holds a PhD in landscape acoustics from ETH Zurich, where she installed a new studio for the spatial simulation of sonic landscapes. She teaches at ETH Zurich and Parsons Paris and is currently a guest composer in the Acoustic-and-Cognitive-Spaces and the Perception-and-Sound-Design Teams at IRCAM-STMS.",
                "date_modified": "2024-03-21T11:01:29.312466+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 766,
                        "forum_user": 17604,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "ns_echora",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "niches-acoustiques-urban-soundscape-design-or-composing-with-the-sonic-landscape-of-a-public-square-in-paris-niches-acoustiques-i",
        "pk": 1365,
        "published": true,
        "publish_date": "2022-09-20T08:57:25+02:00"
    },
    {
        "title": "Conference - CINE - CONCERT - Metropolis - Buñuel triptych - Chaplin Factory by Martin Matalon",
        "description": "This conference will address the relationships between music and image and, more specifically, the cine-concert.",
        "content": "<blockquote type=\"cite\">\r\n<h2 style=\"text-align: left;\"><span><b></b></span><strong>Conference - CINE-CONCERT&nbsp;- Metropolis - Bu&ntilde;uel triptych -&nbsp;Chaplin Factory</strong></h2>\r\n<h2 style=\"font-weight: 400;\"><strong>- Martin Matalon</strong></h2>\r\n<h2 style=\"font-weight: 400;\"><strong></strong><span><img src=\"/media/uploads/cince_concert_chaplin_martin_matalon.png\" width=\"812\" height=\"452\" /></span></h2>\r\n<div style=\"text-align: left;\">\r\n<blockquote type=\"cite\">\r\n<div>\r\n<p>This conference will explore the relationships between music and images and, more specifically, the concept of cine-concert.</p>\r\n</div>\r\n<div>\r\n<p style=\"font-weight: 400;\">A movie contains a large number of elements and is capable of both suggesting form and generating matter.</p>\r\n<p style=\"font-weight: 400;\">The musical form and its articulation can be defined by simple data such as the number and duration of the scenes that make up the movie, or by more complex data such as the editing, with its techniques and rhythm, the framing, the play of light and shadow, the composition of the shots, their architecture, their material, the acting, the atmosphere...</p>\r\n<p style=\"font-weight: 400;\">Another pillar of the movie's construction emerges through the script and its relationship with the narrative and its conventions: Unity of space - time - character concept - narrative pockets or abstraction and its visual implications.</p>\r\n<p style=\"font-weight: 400;\">The script will also define the psychological, dramatic, humorous or political content of the movie.&nbsp;All these elements, and many others, are likely to generate and suggest musical material and form.</p>\r\n<p style=\"font-weight: 400;\">The ways in which a composer can respond to a movie are just as rich and varied. 
Is it better to go against the director's grain to get a particular idea across?</p>\r\n<p style=\"font-weight: 400;\">Is it not redundant in some parts to write music that &lsquo;parallels&rsquo; the images? Why does it work in some places and not in others?</p>\r\n<p style=\"font-weight: 400;\">Given all this, can we keep our freedom and write music that is not purely functional, but that meets our most intimate needs as composers while maintaining a friendly relationship with the&nbsp;movie?&nbsp;</p>\r\n<p style=\"font-weight: 400;\"><span style=\"text-decoration: underline;\">More information about the movie:</span>&nbsp;<a href=\"https://manifeste.ircam.fr/en/news/martin-matalon-en-studio46/\">https://manifeste.ircam.fr/en/news/martin-matalon-en-studio46/</a></p>\r\n</div>\r\n</blockquote>\r\n</div>\r\n</blockquote>",
        "topics": [],
        "user": {
            "pk": 88876,
            "forum_user": {
                "id": 88769,
                "user": 88876,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d41d8cd98f00b204e9800998ecf8427e?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-10-23T11:00:54.123455+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "mmatalon",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "conference-cine-concert-metropolis-bunuel-triptych-chaplin-factory-by-martin-matalon",
        "pk": 3049,
        "published": true,
        "publish_date": "2024-10-22T15:03:03+02:00"
    },
    {
        "title": "“Lenna” (2019): A 22.2ch sound installation under the Creative Commons license by Miyu Hosoi",
        "description": "Focusing on the orientation and dispersion of sound images, this spatial musical work was made using multiple audio channels, and only the artist's voice as a sound source. It represents at once an attempt to encourage the creation of multichannel acoustic contents, and the theoretical and practical development of audiovisual environments.",
        "content": "<p></p>\r\n<p style=\"text-align: center;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9cc71d20ecb917303d3a6076c25856f3.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: center;\"><em><sub>Miyu Hosoi \"Lenna\"(2023), Photo: Richard Chang, Photo courtesy of fundesign.tv</sub></em></p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p style=\"text-align: center;\"><img alt=\"Miyu Hosoi &amp;ldquo;Lenna&amp;rdquo; (2019), Photo: KIOKU Keizo, Photo Courtesy: NTT InterCommunication Center [ICC]\" src=\"https://forum.ircam.fr/media/uploads/user/6198e41d36d3e959903b67a7dd1b623b.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: center;\"><sub><em>Miyu Hosoi &ldquo;Lenna&rdquo;(2019), Photo: KIOKU Keizo, Photo Courtesy: NTT InterCommunication Center [ICC]</em></sub></p>\r\n<p style=\"text-align: left;\">&nbsp;</p>\r\n<p style=\"text-align: left;\">While sound systems used to be based on such standard formats as mono (1 channel), stereo (2 channels) and surround (5.1 channels), this work adopts the 22.2 channel surround format that was first implemented in the audio production and transmission of NHK&rsquo;s 8K Satellite Broadcasting programs. It is, however, a format that the average music creator and listener rarely has a chance to use both as a production and a playback environment. In this exhibition, visitors can experience the work via a system that reproduces its sound stage in a different (2-channel) format.</p>\r\n<p style=\"text-align: left;\">Based on the fact that there still exist only few audio samples that are compatible with the 22.2 channel format, the work was made with the &ldquo;conception and implementation of acoustic creation and listening environments&rdquo; in mind. 
The title was borrowed from the name of a female model whose photo is widely used as a standard test image in the field of image processing. Through the free distribution and Creative Commons licensed secondary use of 22.2 channel sound data, and concrete measures such as experiments with remixing and converting, &ldquo;Lenna&rdquo; aims not only to serve as a sample for multichannel works in the future, but also to inspire endeavors that help stimulate the discussion on environments and distribution of new audiovisual formats.</p>\r\n<p style=\"text-align: left;\"><sub><em>Text quoted from <a href=\"https://www.ntticc.or.jp/en/archive/works/lenna/\">https://www.ntticc.or.jp/en/archive/works/lenna/</a></em></sub></p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p style=\"text-align: center;\"><img alt=\"Miyu Hosoi &amp;ldquo;Lenna&amp;rdquo; 2019 Photo: Yasuhiro Tani Photo Courtesy: Yamaguchi Center for Arts and Media[YCAM]\" src=\"https://forum.ircam.fr/media/uploads/user/26da4cb57aaf69b582617c12736c5f20.jpg\" /></p>\r\n<p style=\"text-align: center;\"><em><sub>Miyu Hosoi &ldquo;Lenna&rdquo;(2019), Photo: Yasuhiro Tani, Photo Courtesy: Yamaguchi Center for Arts and Media[YCAM]</sub></em></p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p style=\"text-align: center;\"><strong><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/d06034b1ea610efb26e9e5c5e0af6d4d.jpg\" /></strong></p>\r\n<p style=\"text-align: center;\"><em><sub>Miyu Hosoi \"Lenna\"(2020), Photo: RYOICHI KAWAJIRI, Photo courtesy: Sapporo Cultural Arts Community Center SCARTS</sub></em></p>\r\n<p>&nbsp;</p>\r\n<p><strong>Credit</strong></p>\r\n<p>Lenna(2019)<br />Concept/Voice/Recording: Miyu Hosoi<br />Composer: Chikara Uemizutaru<br />Mix: Toshihiko Kasai, Misaki Hasuo<br />3D Audio Design: Misaki Hasuo<br />3D Sound System: Jiro Kubo (ACOUSTIC FIELD)<br />Recording assistant: Akihiro Iizuka<br />Mastering: Moe Kazama</p>",
        "topics": [
            {
                "id": 3506,
                "name": "22.2ch",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 22,
                "name": "Voice",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 87770,
            "forum_user": {
                "id": 87666,
                "user": 87770,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/MiyuHosoi_01_1200.jpg",
                "avatar_url": "/media/cache/15/cc/15cc175678389c17fb6b7a860b12e54b.jpg",
                "biography": "Born in 1993, based in Tokyo, sound artist Miyu HOSOI creates works featuring multiple recordings of her own voice, sound installations using multi-channel sound systems, outdoor installations, performing arts productions, focusing on the way sound transforms the perception of space and situations.\nHer works have been presented at Barbican Centre London, Tokyo International Haneda Airport, Tokyo Metropolitan Hibiya Park, Nagano Prefectural Art Museum, Audio Engineering Society[AES], NTT InterCommunication Center[ICC] Anechoic Room, Yamaguchi Center for Arts and Media[YCAM], Aichi Arts Center and more.  In 2024, on stage as a performer at La Biennale di Venezia – Danza 2024, for the theater piece “Tangent” by Shiro Takatani(DUMB TYPE).",
                "date_modified": "2025-11-04T18:05:33.476931+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "miyuhosoi",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "lenna-2019-a-222ch-sound-installation-under-the-creative-commons-license-by-miyu-hosoi",
        "pk": 3775,
        "published": true,
        "publish_date": "2025-10-06T12:11:09+02:00"
    },
    {
        "title": "«FERAL FREQUENCIES by Wilding AI» represented by Alexandre Saunier (FR/CH) and Maurice Jones (DE/CA)",
        "description": "FERAL FREQUENCIES is an AI-driven spatial sound composition developed and presented by the Wilding AI collective.",
        "content": "<p></p>\r\n<p>Large Language Models (LLMs) are central to text-to-sound systems like text-to-speech and text-to-music, reshaping musical practices through prompt-based interaction while raising concerns about authenticity, automation, and the ethical use of artists&rsquo; work. Wilding AI is a public research-creation project bringing together artists, researchers, engineers, and students to explore speculative AI futures. In contrast to techno-solutionist uses of AI, Wilding AI treats LLMs as compositional tools rather than sound generators, integrating them into Max, Ableton Live, and spatial audio environments to control parameters such as 3D sound motion. This intervention presents the collective sound installation FERAL FREQUENCIES, which puts the system into action.</p>\r\n<p>Following a year-long research-creation process culminating in a two-week residency at Laboratoire formes &middot; ondes at Universit&eacute; de Montr&eacute;al, FERAL FREQUENCIES demonstrates the aesthetic, technical, and practical implementation of the collective&rsquo;s developed capabilities in AI-driven sound spatialization. 
The composition traverses four key themes the collective explored: Emotional Sovereignty; Data That Matters; The Algorithmic Shape of Stories; and Breaking Machines / Making Kin.&nbsp;</p>\r\n<p>The Wilding AI Collective consists of Beth Coleman, Maurice Jones, Alexandre Saunier, Portrait XO, Daniela Huerta, Sahar Homami, Debashis Sinha, Pia Baltazar, Nao Tokui, Gadi Sassoon, Heu Hsu, and Federico Visi.</p>\r\n<p>The residency and presentation of FERAL FREQUENCIES are supported by the &laquo; Laboratoire formes &middot; ondes &raquo; at Universit&eacute; de Montr&eacute;al.</p>\r\n<p>The development of FERAL FREQUENCIES at the Society for Arts and Technology is funded by the Minist&egrave;re de l'&Eacute;conomie, de l'Innovation et de l'&Eacute;nergie, in partnership with MA Sc&egrave;ne Nationale.</p>\r\n<p>The Wilding AI project is made possible by round 14 of the Goethe-Institut International Coproduction Fund, and supported by Concordia University, MONOM Studios, 4DSOUND, and Neutone Inc.</p>",
        "topics": [],
        "user": {
            "pk": 49988,
            "forum_user": {
                "id": 49928,
                "user": 49988,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/4ec5fa7c8b25316591a8eff907277438?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-10-15T17:54:50.214334+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "mauriceee",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "feral-frequencies-by-wilding-ai-represented-by-alexandre-saunier-frch-and-maurice-jones-deca",
        "pk": 3740,
        "published": true,
        "publish_date": "2025-10-03T10:37:45+02:00"
    },
    {
        "title": "Oscleton, une application compagnon d'Ableton Live - Arthur Vimond",
        "description": "Oscleton is a mobile companion app for Ableton Live.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" />Presented by: Arthur Vimond<br /><a href=\"https://forum.ircam.fr/profile/arthurvimond/\">Biography</a></p>\r\n<p>Oscleton is a mobile app for Ableton Live.</p>\r\n<p>Its main goal is to provide a simple yet complete wireless companion app to monitor and control the mix of your Ableton Live set in real time (device parameters, track volumes and sends), and to browse your Live library to search, preview and load samples, clips and instrument presets into your Live set.</p>\r\n<p>In this talk, I will explain the technical implementation of this solution, focusing mainly on a custom Ableton Live MIDI Remote Script that uses Live's Python API and communicates with the mobile app via the Open Sound Control protocol.</p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 793,
                "name": "android",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 54849,
            "forum_user": {
                "id": 54787,
                "user": 54849,
                "first_name": "Arthur",
                "last_name": "Vimond",
                "avatar": "https://forum.ircam.fr/media/avatars/Arthur_Vimond_profile_centered.jpg",
                "avatar_url": "/media/cache/62/a0/62a06da26f3562dde867f711b725f094.jpg",
                "biography": "After studying sound engineering and Chinese, I started to learn programming by myself in 2012, and since then I have been specializing in Mobile Development (Android & iOS). I love to create clean, well-structured and reactive code, following the best practices regarding architecture and design. I'm also interested in augmented and virtual reality, creative coding and electronic music production.",
                "date_modified": "2024-03-18T21:13:17.826772+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "arthurvimond",
            "first_name": "Arthur",
            "last_name": "Vimond",
            "bookmarks": []
        },
        "slug": "oscleton-an-ableton-live-companion-app",
        "pk": 2717,
        "published": true,
        "publish_date": "2024-02-12T11:37:23+01:00"
    },
    {
        "title": "macOS signing and release guide 2020",
        "description": "Recently, as part of the final preparations for a release, we've had to go through the arduous task of complying with Apple's codesigning rules. It's very easy to find yourself drowning in the scattered and verbose documentation available (and many blog posts / StackOverflow answers are outdated because it changes so often), so I've put together a quick guide outlining how to complete app distribution on macOS outside of the App Store.\r\n\r\nNote: For Mac App Store & iOS distribution there is some overlap with what is described here - but they require a different set of certificates and some additional steps.",
        "content": "<h1 id=\"macossigningandreleaseguide2020\">macOS signing and release guide 2020</h1>\r\n<p>This guide is focused on creating and signing a macOS application for distribution outside of the App Store. This is most easily achieved by building + exporting through Xcode, which makes this painful experience just a little bit easier. It is also entirely possible to achieve via the command line, which won't be covered here, but I will provide a relevant link at the bottom.</p>\r\n<h3 id=\"account\">Account</h3>\r\n<p>In order to be able to build an application for distribution, your Apple Developer account needs to&nbsp;have \"Access to Certificates, Identifiers &amp; Profiles\" ticked in Developer Resources.</p>\r\n<h3 id=\"identifiersprovisioningprofilesandcertificates\">Identifiers, Provisioning profiles and certificates</h3>\r\n<p>For distributing an application outside the Mac App Store, a Developer ID certificate is required. Apps signed in this way are evaluated by Gatekeeper when a user attempts to install the application.</p>\r\n<p>The following steps can be completed on the <a href=\"https://developer.apple.com/account/resources/identifiers/list\">Certificates, Identifiers &amp; Profiles</a> page on the Apple Developer website.</p>\r\n<ol>\r\n<li>Create an identifier for your application. This is what uniquely identifies an application in Apple's ecosystem.&nbsp;</li>\r\n<li>Create a provisioning profile (per user) with the type <code>Developer ID Application</code> for distribution and with the App ID set to the identifier created in step 1.</li>\r\n<li>Create a signing certificate (per user).</li>\r\n</ol>\r\n<ul>\r\n<li>Generate a Certificate Request from the Keychain Access utility:\r\n<ul>\r\n<li>Keychain Access menu</li>\r\n<li>Certificate Assistant</li>\r\n<li>Request a Certificate From a Certificate Authority.</li>\r\n</ul>\r\n</li>\r\n<li>Fill in your details, leaving the CA email blank.</li>\r\n<li>Save to disk. 
This creates a .certSigningRequest file.</li>\r\n<li>On the Apple Developer website choose \"Create a New Certificate\" with the type \"Developer ID Application\" under distribution. When prompted, upload the .certSigningRequest file created in the previous step.</li>\r\n</ul>\r\n<h3 id=\"stepsinxcode\">Steps in Xcode</h3>\r\n<p>In Xcode go to Xcode menu -&gt; Preferences -&gt; Accounts. Sign into your account if you have not done so already. Click Download Manual Profiles and then Manage Certificates. The distribution certificate we just created should be visible in the pop-up window.</p>\r\n<h4 id=\"deployreleasebuilds\">Deploy &amp; Release builds</h4>\r\n<p>We're mainly focused on signing our app for distribution. But we can also sign for debug + release modes:</p>\r\n<ul>\r\n<li>Click on your Target</li>\r\n<li>Under the Signing Debug / Release menu select the provisioning profile we created above. Xcode should also resolve the Signing certificate. If not, check the dropdown.</li>\r\n<li>If this step fails you may need to create specific development certificates.</li>\r\n</ul>\r\n<h4 id=\"archiving\">Archiving</h4>\r\n<ol>\r\n<li>\r\n<p>Select Product menu -&gt; Archive</p>\r\n</li>\r\n<li>\r\n<p>When step 1 succeeds, open Window -&gt; Organizer, where you can find all of your macOS archives. Select the one you wish to export and click <code>Distribute App</code>.</p>\r\n</li>\r\n<li>\r\n<p>Select <code>Developer ID</code> as the method of distribution.</p>\r\n</li>\r\n<li>\r\n<p>It is recommended that you click <code>Upload</code> in order to have the application validated by Apple's notary service*. Note: if you select this option, you must wait for the service to complete and send you back a notification that it has finished. (<code>Export</code> will immediately create your signed application.)</p>\r\n</li>\r\n<li>\r\n<p>Select your <code>Distribution Certificate</code> and the <code>Provisioning Profile</code> for your app from the two dropdown menus. 
Upload to Apple for validation.</p>\r\n</li>\r\n<li>\r\n<p>Once the validation is completed, from the Organizer window click <code>Export Notarized App</code>.</p>\r\n<p>* It is in fact mandatory for macOS 10.14.5 and later:</p>\r\n<pre><code>Beginning in macOS 10.14.5, software signed with a new Developer ID certificate\r\nand all new or updated kernel extensions must be notarized to run.\r\nBeginning in macOS 10.15, all software built after June 1, 2019, and distributed\r\nwith Developer ID must be notarized.\r\n</code></pre>\r\n</li>\r\n</ol>\r\n<h3 id=\"furtherreading\">Further Reading:</h3>\r\n<p>All of the above has been assembled through trial and error and by reading Apple's scattered documentation / Stack Overflow posts.</p>\r\n<ul>\r\n<li><a href=\"https://help.apple.com/xcode/mac/current/#/dev033e997ca\">Distribute outside the Mac App Store</a></li>\r\n<li><a href=\"https://developer.apple.com/library/archive/technotes/tn2206/_index.html#//apple_ref/doc/uid/DTS40007919\">Code signing in depth</a></li>\r\n<li><a href=\"https://developer.apple.com/library/archive/documentation/Security/Conceptual/CodeSigningGuide/Procedures/Procedures.html\">Code signing guide</a></li>\r\n<li><a href=\"https://developer.apple.com/documentation/xcode/notarizing_macos_software_before_distribution?preferredLanguage=occ\">Notarizing macOS software</a></li>\r\n<li><a href=\"https://developer.apple.com/library/archive/technotes/tn2339/_index.html\">Building from the command line</a></li>\r\n</ul>",
        "topics": [],
        "user": {
            "pk": 17947,
            "forum_user": {
                "id": 17941,
                "user": 17947,
                "first_name": "Matthew",
                "last_name": "Harris",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6e9c4db5d8711662520fe2fdf34ef827?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-03T12:57:54.107807+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 217,
                        "forum_user": 17941,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "harris",
            "first_name": "Matthew",
            "last_name": "Harris",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 17947,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "macos-signing-and-release-guide-2020",
        "pk": 444,
        "published": true,
        "publish_date": "2020-01-21T11:21:35+01:00"
    },
    {
        "title": "(Re)negotiating space: a polyphonic audio program - Adaiya Granberry, Nathanael Amadou Kliebhan, Eddy Ade Akin",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>(Re)negotiating space: a polyphonic audio program is an immersive multi-sensory experience that explores the dynamic qualities of spatialized sound to transcend cultural boundaries and colonial borders. Throughout history, Black people have created and innovated new modes of communication, evading while simultaneously being shaped by our shared histories of colonization and exploitation. This project aims to put the sonic consequences of this history in conversation &ndash; appreciating moments of both harmony and dissonance to highlight our trajectories of radical assemblage and resilience. What are the sociocultural implications of sound? How can sound be employed in the movement towards collective liberation? What are the possibilities for a Black sonic lingua franca? Bridging a myriad of fragmented sounds, voices, and visuals, (re)negotiating space is a disembodied representation of the vibrant sonic nuances of the Black diaspora.</p>",
        "topics": [],
        "user": {
            "pk": 33004,
            "forum_user": {
                "id": 32956,
                "user": 33004,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/c2b8355e26cc5b9a9a020f2b700770f2?s=120&d=retro",
                "biography": "Adaiya Granberry is a Black and Filipino artist-researcher-storyteller originally from Tacoma, WA, USA / currently based in East London. Rooted in a Black ecofeminist praxis, her work is an interrogation of transcendent alternative futures characterized by a radical sense of care, love, and interdependence. Using moving image, sound, and installation, her practice grapples with personal and collective legacies of colonial violence and extraction. She is a current MA student at the Royal College of Art and received her BA with distinction from Duke University.",
                "date_modified": "2023-03-26T13:27:01+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "adaiya",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "renegotiating-space-a-polyphonic-audio-program-adaiya-granberry-nathanael-amadou-kliebhan-eddy-ade-akin",
        "pk": 2132,
        "published": true,
        "publish_date": "2023-03-13T16:15:24+01:00"
    },
    {
        "title": "Oiseaux hurleurs - Jsuk Han",
        "description": "This project aims to use audio feedback and flocking algorithms, applying them to multichannel systems to simulate natural phenomena.",
        "content": "<p></p>\r\n<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par : Jsuk Han&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/jhan/\">Biographie</a></p>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/69d75794917bb0cd4108ad4f6e827cef.png\" /></p>\r\n<p style=\"text-align: center;\"><em><sub>&lt;Logistic Feedback&gt;, Platform-L Platform Live, Seoul, KR</sub></em></p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: justify;\">Ce projet vise &agrave; utiliser les algorithmes de r&eacute;troaction sonore et de flocage, &agrave; les appliquer &agrave; des syst&egrave;mes multicanaux et &agrave; simuler des ph&eacute;nom&egrave;nes naturels. Le feedback audio, &eacute;galement appel&eacute; hurlement, est litt&eacute;ralement un &eacute;tat dans lequel un microphone d'entr&eacute;e et un haut-parleur de sortie sont connect&eacute;s l'un &agrave; l'autre. Si vous augmentez la valeur du gain tout en vous faisant face, la fr&eacute;quence de r&eacute;sonance correspondant &agrave; l'appareil est naturellement g&eacute;n&eacute;r&eacute;e. 
Ce qui est unique, c'est que dans le cas d'un seul canal (un haut-parleur et un microphone), une simple onde sinuso&iuml;dale qui r&eacute;sonne avec les caract&eacute;ristiques du support est g&eacute;n&eacute;r&eacute;e, mais lorsque l'on passe &agrave; plusieurs canaux, la fr&eacute;quence est transform&eacute;e de mani&egrave;re plus complexe.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b9a2707994900e2cfdf04d8838e73e76.png\" />&nbsp; &nbsp;<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0010b3cb11d15c6536154f6358b9f9b5.gif\" /></p>\r\n<p style=\"text-align: center;\"><em><sub>IRCAM SPAT5</sub></em><em><sub><span>&nbsp;</span>viewer</sub></em><em><sub>(left),<span>&nbsp;</span></sub></em><em><sub>Flocking</sub></em><em><sub><span>&nbsp;</span>Algorithm</sub></em><em><sub><span>&nbsp;</span>Boid</sub></em><em><sub>(right)</sub></em></p>\r\n<p>&nbsp;</p>\r\n<p style=\"text-align: justify;\">En utilisant le syst&egrave;me Ambisonics, un type d'audio spatial, chaque canal de haut-parleur et de microphone peut former un r&eacute;seau qui constitue un espace virtuel. En particulier, en utilisant la fonction panoramique du programme SPAT de l'IRCAM, vous pouvez sp&eacute;cifier la position du haut-parleur dans un espace virtuel au sein du programme, et la position de l'entr&eacute;e sonore (microphone) repr&eacute;sent&eacute;e par les points verts peut &eacute;galement &ecirc;tre sp&eacute;cifi&eacute;e en temps r&eacute;el. Dans ce projet, le son de retour sera contr&ocirc;l&eacute; en appliquant le comportement du groupe de l'&eacute;cosyst&egrave;me par le biais de l'algorithme de flocage (Boid) utilisant un syst&egrave;me de particules aux valeurs X,Y,Z des coordonn&eacute;es d'entr&eacute;e audio. On aura l'impression qu'une vol&eacute;e d'oiseaux se rassemble et nage dans l'espace ambisonique, g&eacute;n&eacute;rant des sons en retour. 
Ces sons imitent divers ph&eacute;nom&egrave;nes observ&eacute;s dans la nature, et les sons g&eacute;n&eacute;r&eacute;s ici pr&eacute;senteront divers aspects, allant de simples sons ambiants sinuso&iuml;daux &agrave; des motifs chaotiques complexes.</p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong>&nbsp;</p>",
        "topics": [
            {
                "id": 1758,
                "name": "algorithmic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1756,
                "name": "audio feedback",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1757,
                "name": "boid",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 38648,
            "forum_user": {
                "id": 38597,
                "user": 38648,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/profile03.jpg",
                "avatar_url": "/media/cache/e5/37/e5370105d6ecdc638849d782dca505c5.jpg",
                "biography": "JSUK HAN creates works of sculpture, installation, and sound performance using sound equipment that he has personally collected or produced, including speakers and microphones. Exploring sound output devices and the properties of sound, he has based his creations on research into equipment for converting electrical signals into sound waves and into the physical vibrations of speakers and waves of sound. He has used phenomena of light, sound, vibration, and resonance to spatially represent normally undetectable feedback loops as a form of communication (inputs and outputs, transmission and reception). Han participated in the 2020 ARKO Art Center feature exhibition Follow, Flow, Feed and held the solo exhibition Feedbacker: Ambitious Borderer at the OCI Museum of Art. He has recently been broadening the scope of his work through collaborations with artists in fields such as architecture, circuses, DJing, and subcultures.",
                "date_modified": "2025-12-24T04:35:40.639089+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 720,
                        "forum_user": 38597,
                        "date_start": "2024-02-07",
                        "date_end": "2026-02-07",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "jhan",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "howlingbirds",
        "pk": 2723,
        "published": true,
        "publish_date": "2024-02-13T08:09:38+01:00"
    },
    {
        "title": "Brise de particules - Jan Ove Hennig",
        "description": "Utilisation de systèmes de particules pour positionner dynamiquement des sons dans l'espace à l'aide de la bibliothèque Spat",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par :&nbsp;&nbsp;Jan Ove Hennig<br /><a href=\"https://forum.ircam.fr/profile/kabuki/\">Biographie</a></p>\r\n<p>La suite Spat est un outil puissant permettant de r&eacute;partir le son dans l'espace en fonction de r&egrave;gles complexes. Une fois qu'une source sonore a &eacute;t&eacute; d&eacute;finie et positionn&eacute;e, elle peut &ecirc;tre utilis&eacute;e comme &eacute;metteur de formes d'ondes g&eacute;n&eacute;r&eacute;es en temps r&eacute;el ou de tampons audio contenant du mat&eacute;riel pr&eacute;enregistr&eacute;. Il n'est pas facile de trouver une m&eacute;thodologie qui permette &agrave; l'auditeur de comprendre intuitivement la logique d'un ensemble de r&egrave;gles tout en rendant le processus g&eacute;rable pour un compositeur ou un interpr&egrave;te.</p>\r\n<p>L'objectif de ce projet &eacute;tait d'&eacute;tudier les concepts qui s'appuient sur des r&egrave;gles physiques pour calculer la position des sources sonores individuelles. Parmi plusieurs &eacute;tudes, l'utilisation de syst&egrave;mes de particules comme principe sous-jacent au mouvement du son a donn&eacute; les r&eacute;sultats les plus prometteurs. C'est ainsi qu'est n&eacute;e \"Particle Breeze\", une installation sonore immersive command&eacute;e &agrave; l'origine par Genelec Japan. L'objectif &eacute;tait de cr&eacute;er des exp&eacute;riences uniques en temps r&eacute;el pour les clients visitant leur salle d'exposition Atmos &agrave; Tokyo. Les notes individuelles de la composition se comportent selon un syst&egrave;me de particules et traversent la pi&egrave;ce en se basant sur des concepts d'attraction et de friction. Tous les aspects du syst&egrave;me de particules sont calcul&eacute;s &agrave; l'aide de la bo&icirc;te &agrave; outils Jitter fournie avec Max/MSP. 
Le but de cette installation &eacute;tait d'explorer un concept de son immersif qui n'est pas bas&eacute; sur un enregistrement st&eacute;r&eacute;o qui est ensuite spatialis&eacute;, ou un enregistrement qui conserve la qualit&eacute; spatiale de sa pi&egrave;ce. Au lieu de cela, la position de chaque note est aussi cruciale que les composantes de hauteur et d'amplitude d&eacute;sign&eacute;es.</p>\r\n<p>J'ai depuis d&eacute;velopp&eacute; ce concept et l'ai adapt&eacute; &agrave; la configuration &agrave; 24 haut-parleurs du&nbsp;SoundLab de la City University de Hong Kong, o&ugrave; les sons ont &eacute;t&eacute; g&eacute;n&eacute;r&eacute;s en direct par un synth&eacute;tiseur modulaire, puis spatialis&eacute;s sous forme de particules par le patch Max/MSP. Ce projet m'a montr&eacute; qu'il est possible de manipuler la position des notes individuelles dans l'espace d'une mani&egrave;re qui soit significative &agrave; la fois pour l'interpr&egrave;te et pour l'auditeur.</p>\r\n<p></p>\r\n<p><img src=\"https://forum.ircam.fr/media/uploads/ircam_forum_2024_photo_jan_hennig.png\" alt=\"\" width=\"676\" height=\"507\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1829,
                "name": "#spat",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 59124,
            "forum_user": {
                "id": 59059,
                "user": 59124,
                "first_name": "Jan Ove",
                "last_name": "Hennig",
                "avatar": "https://forum.ircam.fr/media/avatars/Kabuki_Portrait_-_Processed.jpg",
                "avatar_url": "/media/cache/d0/7f/d07f990b002b5d863a5794680b842936.jpg",
                "biography": "I'm a sound artist and music producer based in Frankfurt, Germany with a passion for sharing knowledge. I've worked as lecturer at the Abbey Road Institute in Frankfurt (with focus on Max/MSP and sound synthesis) and developed video series for Softube (Modular Sound Explorations) and Korg (Sequencing Strategies) among others. In addition to releasing music and performing live with my modular synthesizer I'm also exhibiting large-format audio installations based around my interests in 3d printing, microcontrollers and their interactions with sensors and physical objects.",
                "date_modified": "2025-12-08T20:39:01.777661+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 965,
                        "forum_user": 59059,
                        "date_start": "2024-10-17",
                        "date_end": "2025-10-17",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "kabuki",
            "first_name": "Jan Ove",
            "last_name": "Hennig",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2759,
                    "user": 59124,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "particle-breeze",
        "pk": 2759,
        "published": true,
        "publish_date": "2024-02-19T21:54:11+01:00"
    },
    {
        "title": "ASAP tutorials",
        "description": "ASAP is a set of audio plug-ins that allows transforming sound in a creative way. You are invited to play with the sound representation and the synthesis parameters in order to generate new sounds. The plug-ins can also be used in order to correct the defaults of the sound and to improve audio rendering. Thanks to the ARA2 integration, the spectral transformations are integrated into your editing workflow.",
        "content": "<p><span><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"https://www.youtube.com/embed/p2Xic7EV4mA\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></span></p>\r\n<p><span><a href=\"https://forum.ircam.fr/projects/detail/asap/\">ASAP</a> </span>contains:</p>\r\n<ul>\r\n<li><strong><strong>Pitches Brew&nbsp;[Premium]</strong>:<span>&nbsp;</span></strong><span>The<span>&nbsp;plugin, based on the ARA2 extension, allows you to draw and edit pitches and formants with great precision</span><span>. This final version includes a marker system that makes editing curves faster and easier. In addition, it is possible to load frequency curves and markers in JSON, CSV, and CUE formats allowing exchanges not only with Partiels but also with many DAWs.</span></span><span>.&nbsp;</span><a href=\"https://www.youtube.com/watch?v=qQgqTrGgc3o&amp;list=PL6MqWe5aRuOAnKBcJKAGjY4vbixGRqKat&amp;index=2\">Check out the video tutorial!</a><br /><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"https://www.youtube.com/embed/qQgqTrGgc3o\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></li>\r\n<li><strong><strong>Spectral Surface [Premium]</strong>: </strong>The plug-in allows you to<strong>&nbsp;</strong><em>draw shape filters on the sound spectrogram</em><strong>&nbsp;</strong>and to control their gain and the fades. The sound representation and the effect interface, made possible thanks to the ARA 2 plug-in extension, allow the creation of very<strong>&nbsp;</strong><em>complex and precise surface filters</em><strong>&nbsp;</strong>to reduce or increase specific parts of the spectral components of the sound. 
The plug-in can be used to<strong>&nbsp;</strong><em>compensate for annoying artifacts</em>&nbsp;in the sound as well as to&nbsp;<em>transform the sound creatively</em>.&nbsp;<a href=\"https://www.youtube.com/watch?v=1KDq8iZbEZ4&amp;list=PL6MqWe5aRuOAnKBcJKAGjY4vbixGRqKat&amp;index=2\">Check out the video tutorial!</a><br /><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"https://www.youtube.com/embed/6v83VELwfOg\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></li>\r\n<li><strong>Spectral Remix [Free]</strong>: The plug-in allows you to<span>&nbsp;</span><em>control the balance of the harmonic, noise, and attack components</em><span>&nbsp;</span>of the sound. Beyond many original approaches, the plug-in can be used to<span>&nbsp;</span><em>highlight or hide certain audio elements and characteristics</em><span>&nbsp;</span>such as background noise, vocals, percussive sounds, and so on.<span>&nbsp;</span><a href=\"https://www.youtube.com/watch?v=1KDq8iZbEZ4&amp;list=PL6MqWe5aRuOAnKBcJKAGjY4vbixGRqKat&amp;index=2\">Check out the video tutorial!<br /><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"https://www.youtube.com/embed/1KDq8iZbEZ4\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe><br /></a></li>\r\n</ul>\r\n<ul>\r\n<li><strong>Spectral Crossing [Premium]</strong>: The plug-in allows you to<span>&nbsp;</span><em>cross the amplitudes and the frequencies of a source sound and of a side-chain sound to generate a hybrid sound</em>. 
The plug-in can be used to creatively interpolate and<span>&nbsp;</span><em>transform one sound into another<span>&nbsp;</span></em>by gradually mixing the phase and amplitude components of the two audio signals.<span>&nbsp;</span><a href=\"https://www.youtube.com/watch?v=m6gY8YdYyGU&amp;list=PL6MqWe5aRuOAnKBcJKAGjY4vbixGRqKat&amp;index=4\">Check out the video tutorial!<br /></a></li>\r\n</ul>\r\n<ul>\r\n<li><strong>Spectral Morphing [Premium]</strong>: The plug-in allows you to<span>&nbsp;</span><em>apply the spectral characteristics of a side-chain sound to a source sound in order to transform its timbre</em>. By using a voice sound as the side-chain on an instrument sound as the source, spectral morphing can be used to make the<span>&nbsp;</span><em>instrument speak</em>.<span>&nbsp;</span><a href=\"https://www.youtube.com/watch?v=m6gY8YdYyGU&amp;list=PL6MqWe5aRuOAnKBcJKAGjY4vbixGRqKat&amp;index=4\">Check out the video tutorial!<br /></a><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"https://www.youtube.com/embed/m6gY8YdYyGU\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></li>\r\n</ul>\r\n<ul>\r\n<li><strong>Spectral Clipping [Free]</strong>: The plug-in allows you to<span>&nbsp;</span><em>expand and compress the energy of spectral components</em><span>&nbsp;</span>within a range of thresholds. 
It can be used to<span>&nbsp;</span><em>silence low-level sounds such as background noise</em><span>&nbsp;</span>or to<span>&nbsp;</span><em>limit high-energy peaks such as high-pitched bird calls.<span>&nbsp;</span></em><a href=\"https://www.youtube.com/watch?v=qSaVkF0CQuY&amp;list=PL6MqWe5aRuOAnKBcJKAGjY4vbixGRqKat&amp;index=1\">Check out the video tutorial!<br /></a><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"https://www.youtube.com/embed/qSaVkF0CQuY\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></li>\r\n</ul>\r\n<ul>\r\n<li><strong>Formant Shaping [Premium]</strong>: The plug-in allows you to<span>&nbsp;</span><em>modify the vowels and play with the formant resonances</em><span>&nbsp;</span>of the sound. It can be used to<span>&nbsp;</span><em>change the spoken vowels of a voice</em><span>&nbsp;</span>or to<span>&nbsp;</span><em>vocalize instruments such as a drum set</em>.<span>&nbsp;</span><a href=\"https://www.youtube.com/watch?v=sBrDGryG5Kw&amp;list=PL6MqWe5aRuOAnKBcJKAGjY4vbixGRqKat&amp;index=3\">Check out the video tutorial!<br /></a><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"https://www.youtube.com/embed/sBrDGryG5Kw\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe><br /><br />The plug-in set supports multichannel processing at any common sample rate. The interface offers graphical themes to adapt to the visual appearance of your digital audio workstation and your operating system.</li>\r\n</ul>\r\n<hr />\r\n<p>The Spectral Remix and the Spectral Clipping plug-ins are part of the Ircam Forum free membership. 
The other ASAP plug-ins are part of the<span>&nbsp;</span><a href=\"https://forum.ircam.fr/about/offres-dabonnement/\"><strong>Ircam Forum Premium</strong></a><span>&nbsp;</span>technologies bundle offering<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/Technologies-ircam-premium/\">many tools, plug-ins, and applications</a><span>&nbsp;</span>to analyze, synthesize, and transform the sound. The Premium subscription allows you to download all technologies and their updates, and to generate activation keys during the entire subscription period. Once installed on your machine, the technologies continue to work even if your subscription is terminated, with no time limit. The free membership allows you to use the demo version of the ASAP plug-ins.</p>\r\n<hr />\r\n<p><strong>ASAP</strong><span>&nbsp;</span>is designed and developed by Pierre Guillot at<span>&nbsp;</span><a href=\"https://www.ircam.fr/innovations\">Ircam IMR Department</a><br /><strong>SuperVP</strong><span>&nbsp;</span>is designed by Axel R&ouml;bel (based on an initial version by Philippe Depalle) and developed by Axel R&ouml;bel &amp; Fr&eacute;d&eacute;ric Cornu -<span>&nbsp;</span><a href=\"https://www.stms-lab.fr/\">Ircam Analysis-Synthesis team</a><span>&nbsp;</span>of the STMS Lab hosted at IRCAM.</p>",
        "topics": [
            {
                "id": 585,
                "name": "Anasyn team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 925,
                "name": "ASAP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 134,
                "name": "Audiosculpt",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1192,
                "name": "PhaseVocoder",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 210,
                "name": "SuperVP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "asap-tutorials",
        "pk": 2100,
        "published": true,
        "publish_date": "2023-02-28T20:34:59+01:00"
    },
    {
        "title": "Somax & Co. by the REACH Team",
        "description": "This workshop presents the latest software releases within the ReachTools suite, developed by the REACH team within the Music Representation research group at IRCAM. Key demonstrations will include the new Somax for Live and Somax2Collider, alongside an analysis of advanced improvisation strategies derived from Prosax.\r\nMore info: reach.ircam.fr",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<h2>Somax for Live</h2>\r\n<p>As part of the REACH project in the Music Representation team at IRCAM, Somax for Live brings the real-time interactive capabilities of <a href=\"https://forum.ircam.fr/projects/detail/somax-2/\">Somax2</a>&nbsp;directly into&nbsp;<span>Ableton Live</span>.&nbsp;</p>\r\n<p>Developed by&nbsp;<span>Manuel Poletti</span>&nbsp;in collaboration with&nbsp;<span>Marco Fiorini</span>&nbsp;and&nbsp;<span>G&eacute;rard Assayag</span>, this new integration bridges advanced symbolic AI improvisation with a widely used digital audio workstation, opening new creative workflows for composers, performers, and producers.</p>\r\n<p>Implemented as a collection<span>&nbsp;of Max for Live devices</span>,&nbsp;Somax for Live&nbsp;allows users to interactively co-create with the system within Live&rsquo;s native environment, combining the temporal and stylistic modeling of Somax2 with the flexibility of Live&rsquo;s clips, automations, and control interfaces. 
This tight coupling between musical intelligence and production tools encourages a fluid dialogue between human and machine musicianship, enabling adaptive accompaniment, generative composition, and exploratory performance practices within an accessible and modular setup.</p>\r\n<p>This presentation will showcase the architecture, interaction paradigms, and artistic use cases of&nbsp;Somax for Live, illustrating how the REACH project advances hybrid human&ndash;AI co-creativity in contemporary music-making.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9dcd1c19ff5165eddc68d505e3d722cf.png\" /></p>\r\n<h2><a href=\"https://forum.ircam.fr/projects/detail/prosax/\">Prosax</a></h2>\r\n<p>Prosax_001.maxpat is a research patch, a proof of concept intended to explore prosodic profiles on segmented audio to generate event labels for OMAX and SOMAX2. It is not a finished tool but a work in progress, mainly intended for speech segmentation; you can also use it for other audio material, with more or less success. Just explore.</p>\r\n<p>The main ideas in this work emerged from Artistic Research carried out with Val&eacute;rie Philipin (to whom we are indebted for her musical and literary expertise), from October 2023 to June 2024 (in the Reach project context), on the use of Somax2 in a spoken and sung voice context.</p>\r\n<p>This patch is an adaptation of the &lt;pipo.sylseg&gt; help patch (from the MuBu for Max package), based on research by Nicolas Obin, Fran&ccedil;ois Lamare and Axel Roebel [Obin, Lamare, Roebel 2013]; it requires a prior installation of the latest &ldquo;MuBu For Max&rdquo; package developed by the ISMM Team at Ircam (<a href=\"https://ismm.ircam.fr/mubu/\">https://ismm.ircam.fr/mubu/</a>). 
<span>&nbsp;&nbsp;</span></p>\r\n<p><a href=\"https://github.com/DYCI2/prosax\">https://github.com/DYCI2/prosax</a></p>\r\n<p>&nbsp;</p>\r\n<h2>Somax2Collider</h2>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/588888b96983258e6ecd2c34ad0f9ed4.png\" /></p>\r\n<p>Somax2Collider is a SuperCollider-based front-end designed to control the Somax2 server, enabling the real-time creation of musical agents within a dynamic multi-agent ecosystem where multiple agents can perform simultaneously. The project provides a flexible framework for spatialized multi-agent performance, live coding, and experimental co-improvisation practices, opening new perspectives for interacting with Somax-style musical agents.</p>\r\n<p>In this presentation, I will demonstrate the use of the latest version of Somax2Collider in an ambisonic environment, as well as within a system of autonomous networked loudspeakers. This system has already been used in several mixed-music compositions and improvised performances.</p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 748,
                "name": "co-creativity",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2788,
                "name": "Improvisation, generativity and co-creative interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4242,
                "name": "prosax",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1287,
                "name": "REACH",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 747,
                "name": "somax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4243,
                "name": "somax2collider",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4244,
                "name": "somaxforlive",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32267,
            "forum_user": {
                "id": 32219,
                "user": 32267,
                "first_name": "Marco",
                "last_name": "Fiorini",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-01-16_at_10.39.51.jpeg",
                "avatar_url": "/media/cache/e7/ed/e7ed5b0d44a066e65e188a351b8c9bb8.jpg",
                "biography": "Marco Fiorini is an Italian musician and researcher specializing in human-machine interaction in musical improvisation. \nHe is part of the Music Representation team at IRCAM in Paris, working on the ERC REACH project with a focus on Somax2. \nAs a PhD candidate at Sorbonne Université, he develops co-creative instruments that foster real-time interaction between musicians and artificial agents.\nHe has collaborated with artists such as Joëlle Léandre, George Lewis, Steve Lehman, and Horse Lords. His work as a guitarist, electronic musician and computer music designer has been featured at major international venues and festivals including Carnegie Hall (New York), ManiFeste (Centre Georges Pompidou, Paris), Improtech Paris-Tokyo (Tokyo University of the Arts), Klang (Royal Danish Academy of Music, Copenhagen), and Mixtur (ESMUC, Barcelona).\nIn 2024, he was an invited lecturer at the Max Summer School at Tokyo Geidai University of the Arts, and in 2025 he will lead a Somax2 workshop at Berklee College of Music for the 50th anniversary of the International Computer Music Conference in Boston.\nHe holds degrees in Jazz Guitar, Electronic Music, Sound and Music Computing, and Software Engineering.",
                "date_modified": "2026-02-25T18:50:33.457396+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 407,
                        "forum_user": 32219,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-01",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "fiorini",
            "first_name": "Marco",
            "last_name": "Fiorini",
            "bookmarks": []
        },
        "slug": "somax-co-by-the-reach-team",
        "pk": 4368,
        "published": true,
        "publish_date": "2026-02-16T14:46:58+01:00"
    },
    {
        "title": "\"viral.signal.detector II\" by Felix Römer (Germany)",
        "description": "A generative acousmatic composition, sonifying the dynamics of a simulated epidemic based on the SIRS-model.",
        "content": "<p></p>\r\n<p><em><strong>viral.signal.detector II</strong></em> is a generative acousmatic composition, sonifying the dynamics of a simulated epidemic based on the SIRS-model.&nbsp; Its title is a biological and informational metaphor: signal detection, a term used in both media theory and epidemiology, refers to identifying information-bearing patterns, such as those that indicate the early spread of a disease. In the 21st century, the term has acquired a multifaceted profundity: today&rsquo;s society is not only endangered by the next potential <em>biological</em> epidemic, but also by the epidemic of (mis-)information spreading virally across the internet. In the proposed composition, the listeners become the signal detectors of such epidemic outbreaks.</p>\r\n<p>The composition's basis is a particle system in which each particle represents an individual in a society, with red representing &ldquo;infected&rdquo;, green &ldquo;immune&rdquo; and black &ldquo;susceptible to infection&rdquo;. Based on the individuals' probabilities of infection, recovery and waning immunity, beautiful patterns emerge, which are (a) projected visually into the space and (b) used to manipulate and spatialize the sound. In the first iteration of the project, the three probability parameters were controlled in real time by the composer via a MIDI controller.&nbsp;</p>\r\n<p>The project&rsquo;s first iteration was premiered in the context of the <em>3D-Audio Art Lab</em> at the <em>Darmst&auml;dter Ferienkurse 2025</em> in a concertante version featuring an array of 54 loudspeakers. The project's second iteration, featured at the 2025 IRCAM Forum Workshops in Taipei, comes as an installation setup for 8 speakers. Here, the epidemic evolves constantly, without a specified beginning or ending. 
The <strong>audience are invited to participate</strong>: their position and movement in the space will be analysed and used to determine how the pandemic unfolds, both visually and sonically.</p>\r\n<p>More information on the project can be found here: <a href=\"https://www.felix-roemer.com/viral\">https://www.felix-roemer.com/viral</a></p>\r\n<p>More information on the SIRS-model can be found here: <a href=\"https://www.complexity-explorables.org/explorables/critical-hexsirsize/\">https://www.complexity-explorables.org/explorables/critical-hexsirsize/</a></p>",
        "topics": [
            {
                "id": 2736,
                "name": "Forum 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2648,
                "name": "generative audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 636,
                "name": "Generative music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1707,
                "name": "installation sonore",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27232,
            "forum_user": {
                "id": 27204,
                "user": 27232,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Bildschirmfoto_2023-05-16_um_13.43.10_1.png",
                "avatar_url": "/media/cache/d8/97/d8973a3b6d24849331f08786af566751.jpg",
                "biography": "Felix Römer (*1993) is a Berlin-based composer and pianist who mainly works in the fields of contemporary music, film music, and improvisation.\n\nHe holds a Bachelor's Degree in Piano/Jazz from the University of Fine Arts Berlin as well as a Master's Degree in Composition for Screen from Film University Babelsberg KONRAD WOLF. In 2019, he studied with Howard Davidson in the composition department of the Royal College of Music, London. From 2018 to 2019, he studied with Jean-François Zygel in the improvisation department of the Paris Conservatoire (CNSMDP). He took part in numerous masterclasses (with Ensemble Lux:NM, Francesca Verunelli, Helmut Lachenmann, among others) and was a finalist in several international competitions (e.g. the Montreux Jazz Solo Piano Competition 2016). His works have been programmed at numerous festivals, institutions and broadcasting stations, such as IRCAM (Paris), Hamburg Contemporary, the Internationale Ferienkurse Darmstadt and Composers Concordance (New York).\n\nHis main musical interests lie in new technologies, the musicality of language, and spectral and soundscape-oriented composition, with a particular fascination for pipe organs.",
                "date_modified": "2025-12-03T12:18:30.173017+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 509,
                        "forum_user": 27204,
                        "date_start": "2023-04-14",
                        "date_end": "2024-04-14",
                        "type": 0,
                        "keys": [
                            {
                                "id": 291,
                                "membership": 509
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "fiedert",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "viralsignaldetector-ii-by-felix-romer-germany",
        "pk": 3773,
        "published": true,
        "publish_date": "2025-10-06T12:07:26+02:00"
    },
    {
        "title": "“Lenna” (2019): A 22.2ch sound installation under the Creative Commons license by Miyu Hosoi",
        "description": "Focusing on the orientation and dispersion of sound images, this spatial musical work was made using multiple audio channels, and only the human voice as a sound source. It represents at once an attempt to encourage the creation of multichannel acoustic contents, and the theoretical and practical development of audiovisual environments.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-hors-les-murs-taipei-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><span><img src=\"/media/uploads/miyuhosoi_lenna_ycam02.jpg\" alt=\"\" width=\"698\" height=\"524\" />&nbsp;<img src=\"/media/uploads/miyuhosoi_lenna_ycam01.jpg\" alt=\"\" width=\"939\" height=\"522\" /></span></p>\r\n<p><span>Presented by : Hosoi Miyu</span></p>\r\n<p><a href=\"https://forum.ircam.fr/profile/miyuhosoi/\" target=\"_blank\">Biography</a></p>\r\n<p><span></span></p>\r\n<p><span>Focusing on the orientation and dispersion of sound images, this spatial musical work was made using multiple audio channels, and only the human voice as a sound source. It represents at once an attempt to encourage the creation of multichannel acoustic contents, and the theoretical and practical development of audiovisual environments.</span></p>\r\n<p><span>While sound systems used to be based on such standard formats as mono (1 channel), stereo (2 channels) and surround (5.1 channels), this work adopts the 22.2 channel surround format that was first implemented in the audio production and transmission of NHKʼs 8K Satellite Broadcasting programs. It is, however, a format that the average music creator and listener rarely has a chance to use both as a production and a playback environment. 
In this exhibition, visitors can experience the work via a system that reproduces its sound stage in a different (2-channel) format.</span></p>\r\n<p><span>Based on the fact that there still exist only a few audio samples compatible with the 22.2 channel format, the work was made with the &ldquo;conception and implementation of acoustic creation and listening environments&rdquo; in mind. The title was borrowed from the name of a female model whose photo is widely used as a standard test image in the field of image processing. Through the free distribution and Creative Commons licensed secondary use of 22.2 channel&nbsp;</span>sound data, and concrete measures such as experiments with remixing and converting, &ldquo;Lenna&rdquo; aims not only to serve as a sample for multichannel works in the future, but also to inspire endeavors that help stimulate the discussion on environments and distribution of new audiovisual formats.</p>\r\n<p><span>I also have other surround sound pieces (in Dolby Atmos and 5.1ch), which I have put in a Dropbox folder, FYI.</span></p>\r\n<p><span></span></p>\r\n<p><span><img src=\"/media/uploads/miyuhosoi_lenna_taiwan.jpg\" alt=\"\" width=\"1435\" height=\"956\" /></span></p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 87770,
            "forum_user": {
                "id": 87666,
                "user": 87770,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/MiyuHosoi_01_1200.jpg",
                "avatar_url": "/media/cache/15/cc/15cc175678389c17fb6b7a860b12e54b.jpg",
                "biography": "Born in 1993, based in Tokyo, sound artist Miyu HOSOI creates works featuring multiple recordings of her own voice, sound installations using multi-channel sound systems, outdoor installations, and performing arts productions, focusing on the way sound transforms the perception of space and situations.\nHer works have been presented at Barbican Centre London, Tokyo International Haneda Airport, Tokyo Metropolitan Hibiya Park, Nagano Prefectural Art Museum, the Audio Engineering Society [AES], the NTT InterCommunication Center [ICC] Anechoic Room, the Yamaguchi Center for Arts and Media [YCAM], Aichi Arts Center and more. In 2024, she was on stage as a performer at La Biennale di Venezia – Danza 2024, in the theater piece “Tangent” by Shiro Takatani (DUMB TYPE).",
                "date_modified": "2025-11-04T18:05:33.476931+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "miyuhosoi",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "lenna-by-hosoi-miyu",
        "pk": 3302,
        "published": true,
        "publish_date": "2025-02-20T15:08:45+01:00"
    },
    {
        "title": "Mouja - Nicola Privato",
        "description": "In this lecture-performance, I present Thales and Stacco, two magnet-based interfaces developed at the Intelligent Instruments Lab for embodied, intuitive composition and latent-space navigation in neural synthesis models.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: justify;\">Presented by: Nicola Privato</p>\r\n<p style=\"text-align: justify;\"><a href=\"https://forum.ircam.fr/profile/nicola-privato-gmail/\">Biography</a></p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\">As part of my doctoral research, I am developing a series of magnetic interfaces designed specifically for live performance with RAVE, enabling intuitive, playful navigation of the latent space and intuitive mappings of sensor readings to the latent dimensions. Two of these interfaces, Thales and Stacco, actively interact with the musician through their magnetic fields, thereby influencing performative gestures. Thales consists of a pair of magnetic controllers that repel each other, suggesting the presence of an invisible interface to play with; it was presented at NIME 2023 and is a finalist in this year's Guthman Competition (Georgia Tech). Stacco, by contrast, is a recent, fully functional prototype that I have been performing with in recent months. It is based on four sensors/attractors that interact with magnetic spheres through a wooden board. 
By moving the spheres across the board, the performer controls navigation in the latent space as well as the position of the sound source in the performance space.</p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\">Mouja is a performative exploration of the sonic limits of these interfaces, whose magnetic fields control a series of RAVE models trained at the Intelligent Instruments Lab. The performance is also a playful game of AI reconfigurations and sonic hauntologies, in which the presence-absence of the dataset's sound spectres manifests itself through the unpredictable interactions of the instruments' magnetic forces. In Mouja, the performer plays the instruments through magnetic spheres, using them as a pendulum and manipulating a score that hides magnetic attractors beneath the drawing of an ancient Icelandic spell. These tactile, playful interactions control six superimposed RAVE models, comprising ghostly voices, choirs, pipe organs, liquid sounds and shifting percussion. To strengthen the connection with the performer's gestures, Mouja uses Ambisonics for spatialization. Mouja was presented in preliminary form on 21 October 2023 at Fabryka Sztuki (<a href=\"http://www.fabrykasztuki.org/\">http://www.fabrykasztuki.org/</a>). 
Video and audio documentation of this performance is available at <a href=\"https://nicolaprivato.com/mouja\">https://nicolaprivato.com/mouja</a>.</p>\r\n<p style=\"text-align: justify;\"><span></span></p>\r\n<p style=\"text-align: justify;\"><span><img src=\"/media/uploads/112-_fot.marta_zajac-krysiak_full_-_nicola_privato.jpg\" alt=\"\" width=\"205\" height=\"308\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 623,
                "name": "Ambisonics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1775,
                "name": "Embodiment",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1774,
                "name": "neural synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 22501,
            "forum_user": {
                "id": 22489,
                "user": 22501,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/112-_fot.Marta_Zajac-Krysiak_web.jpg",
                "avatar_url": "/media/cache/93/eb/93ebe785acb1495a11575270d10c6471.jpg",
                "biography": "I’m a PhD candidate in Cultural Studies, conducting my research at the Intelligent Instruments Lab, in Iceland. Previously, I studied Electronic Music and Composition at the Conservatory of Padua (MA), Jazz Improvisation and Composition at the Conservatory of Trieste (BA) and Linguistics at the University of Padua (BA). Before my current position, I have curated musical events and festivals, performed as a jazz guitar player and taught music practice and theory. My current interests include music composition and performance as they intersect with interface design and new technologies. In particular, in my current projects, I explore new ways of performing and composing with AI and neural synthesis, and through this investigate how the introduction of novel technologies is affecting the sociality of music-making.",
                "date_modified": "2024-09-24T16:10:28.919464+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nicola-privato-gmail",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "mouja",
        "pk": 2728,
        "published": true,
        "publish_date": "2024-02-14T15:43:07+01:00"
    },
    {
        "title": "sex mercado",
        "description": "https://sexo-mercado.com/",
        "content": "<p><a href=\"https://sexo-mercado.com/\"><span style=\"\">https://sexo-mercado.com/</span></a></p>\n<p><a href=\"https://x.com/sexomercado\"><span style=\"\">https://x.com/sexomercado</span></a></p>\n<p><a href=\"https://www.pinterest.com/sexomercado/\"><span style=\"\">https://www.pinterest.com/sexomercado/</span></a></p>\n<p><a href=\"https://www.tumblr.com/sexomercado\"><span style=\"\">https://www.tumblr.com/sexomercado</span></a></p>\n<p><a href=\"https://www.twitch.tv/sexomercado/about\"><span style=\"\">https://www.twitch.tv/sexomercado/about</span></a></p>\n<p><a href=\"https://findaspring.org/members/sexomercado/\"><span style=\"\">https://findaspring.org/members/sexomercado/</span></a></p>\n<p><a href=\"https://pledgeme.co.nz/profiles/326782\"><span style=\"\">https://pledgeme.co.nz/profiles/326782</span></a></p>\n<p><a href=\"https://www.minecraft-servers-list.org/details/sexomercado/\"><span style=\"\">https://www.minecraft-servers-list.org/details/sexomercado/</span></a></p>\n<p><a href=\"https://www.iniuria.us/forum/member.php?668615-sexomercado\"><span style=\"\">https://www.iniuria.us/forum/member.php?668615-sexomercado</span></a></p>\n<p><a href=\"https://b.cari.com.my/home.php?mod=space&amp;uid=3392774&amp;do=profile\"><span style=\"\">https://b.cari.com.my/home.php?mod=space&amp;uid=3392774&amp;do=profile</span></a></p>\n<p><a href=\"https://liulo.fm/sexomercado\"><span style=\"\">https://liulo.fm/sexomercado</span></a></p>\n<p><a href=\"https://wefunder.com/mercadosexo\"><span style=\"\">https://wefunder.com/mercadosexo</span></a></p>\n<p><a href=\"https://golosknig.com/profile/sexomercado/\"><span style=\"\">https://golosknig.com/profile/sexomercado/</span></a></p>\n<p><a href=\"https://doodleordie.com/profile/sexomercado\"><span style=\"\">https://doodleordie.com/profile/sexomercado</span></a></p>\n<p><a href=\"https://jobs.lajobsportal.org/profiles/8096084-mercado-sexo\"><span 
style=\"\">https://jobs.lajobsportal.org/profiles/8096084-mercado-sexo</span></a></p>\n<p><a href=\"https://replit.com/@sexomercado\"><span style=\"\">https://replit.com/@sexomercado</span></a></p>\n<p><a href=\"https://secondstreet.ru/profile/sexomercado/\"><span style=\"\">https://secondstreet.ru/profile/sexomercado/</span></a></p>\n<p><a href=\"https://nhattao.com/members/sxomercado.6943775/\"><span style=\"\">https://nhattao.com/members/sxomercado.6943775/</span></a></p>\n<p><a href=\"https://community.m5stack.com/user/sexomercado\"><span style=\"\">https://community.m5stack.com/user/sexomercado</span></a></p>\n<p><a href=\"https://jobs.windomnews.com/profiles/8096083-mercado-sexo\"><span style=\"\">https://jobs.windomnews.com/profiles/8096083-mercado-sexo</span></a></p>\n<p><a href=\"https://www.scener.com/@sexomercado\"><span style=\"\">https://www.scener.com/@sexomercado</span></a></p>\n<p><a href=\"https://demo.wowonder.com/1775110679412012_538687\"><span style=\"\">https://demo.wowonder.com/1775110679412012_538687</span></a></p>\n<p><a href=\"https://www.inkitt.com/sexomercado\"><span style=\"\">https://www.inkitt.com/sexomercado</span></a></p>\n<p><a href=\"https://www.floodzonebrewery.com/profile/gaudeopticos57088/profile\"><span style=\"\">https://www.floodzonebrewery.com/profile/gaudeopticos57088/profile</span></a></p>\n<p><a href=\"https://volleypedia.org/index.php?qa=user&amp;qa_1=sexomercado\"><span style=\"\">https://volleypedia.org/index.php?qa=user&amp;qa_1=sexomercado</span></a></p>\n<p><a href=\"https://qiita.com/sexomercado\"><span style=\"\">https://qiita.com/sexomercado</span></a></p>\n<p><a href=\"https://eo-college.org/members/sexomercado/\"><span style=\"\">https://eo-college.org/members/sexomercado/</span></a></p>\n<p><a href=\"https://www.brownbook.net/business/54970581/sexomercado\"><span style=\"\">https://www.brownbook.net/business/54970581/sexomercado</span></a></p>\n<p><a href=\"https://www.plotterusati.it/user/mercado-sexo\"><span 
style=\"\">https://www.plotterusati.it/user/mercado-sexo</span></a></p>\n<p><a href=\"http://galeria.farvista.net/member.php?action=showprofile&amp;user_id=74927\"><span style=\"\">http://galeria.farvista.net/member.php?action=showprofile&amp;user_id=74927</span></a></p>\n<p><a href=\"https://makeagif.com/user/sexomercado?ref=wz9ltO\"><span style=\"\">https://makeagif.com/user/sexomercado?ref=wz9ltO</span></a></p>\n<p><a href=\"https://www.fitlynk.com/42192a14a\"><span style=\"\">https://www.fitlynk.com/42192a14a</span></a></p>\n<p><a href=\"https://poipiku.com/13403988/\"><span style=\"\">https://poipiku.com/13403988/</span></a></p>\n<p><a href=\"https://www.vid419.com/home.php?mod=space&amp;uid=3482492\"><span style=\"\">https://www.vid419.com/home.php?mod=space&amp;uid=3482492</span></a></p>\n<p><a href=\"https://www.play56.net/home.php?mod=space&amp;uid=6091541\"><span style=\"\">https://www.play56.net/home.php?mod=space&amp;uid=6091541</span></a></p>\n<p><a href=\"https://lamsn.com/home.php?mod=space&amp;uid=1921714\"><span style=\"\">https://lamsn.com/home.php?mod=space&amp;uid=1921714</span></a></p>\n<p><a href=\"https://www.circleme.com/sexomercado\"><span style=\"\">https://www.circleme.com/sexomercado</span></a></p>\n<p><a href=\"https://protocol.ooo/en/users/mercado-sexo\"><span style=\"\">https://protocol.ooo/en/users/mercado-sexo</span></a></p>\n<p><a href=\"https://truckymods.io/user/478253\"><span style=\"\">https://truckymods.io/user/478253</span></a></p>\n<p><a href=\"https://www.slmath.org/people/103168\"><span style=\"\">https://www.slmath.org/people/103168</span></a></p>\n<p><a href=\"https://cannabis.net/user/220754\"><span style=\"\">https://cannabis.net/user/220754</span></a></p>\n<p><a href=\"https://mygamedb.com/profile/sexomercado\"><span style=\"\">https://mygamedb.com/profile/sexomercado</span></a></p>\n<p><a href=\"https://odesli.co/cpwj4qgv94hrp\"><span style=\"\">https://odesli.co/cpwj4qgv94hrp</span></a></p>\n<p><a 
href=\"https://www.claimajob.com/profiles/8096202-mercado-sexo\"><span style=\"\">https://www.claimajob.com/profiles/8096202-mercado-sexo</span></a></p>\n<p><a href=\"https://www.facekindle.com/sexomercado\"><span style=\"\">https://www.facekindle.com/sexomercado</span></a></p>\n<p><a href=\"http://www.askmap.net/location/7779655/vietnam/mercado-sexo\"><span style=\"\">http://www.askmap.net/location/7779655/vietnam/mercado-sexo</span></a></p>\n<p><a href=\"https://fanclove.jp/profile/wyWevAxrB0\"><span style=\"\">https://fanclove.jp/profile/wyWevAxrB0</span></a></p>\n<p><a href=\"https://pumpyoursound.com/u/user/1597841\"><span style=\"\">https://pumpyoursound.com/u/user/1597841</span></a></p>\n<p><a href=\"https://uiverse.io/profile/sexomercad_4375\"><span style=\"\">https://uiverse.io/profile/sexomercad_4375</span></a></p>\n<p><a href=\"https://camp-fire.jp/profile/sexomercado\"><span style=\"\">https://camp-fire.jp/profile/sexomercado</span></a></p>\n<p><a href=\"https://www.dailymotion.com/user/sexomercado\"><span style=\"\">https://www.dailymotion.com/user/sexomercado</span></a></p>\n<p><a href=\"https://edabit.com/user/v9TZsjy8ByxGojc9H\"><span style=\"\">https://edabit.com/user/v9TZsjy8ByxGojc9H</span></a></p>\n<p><a href=\"https://mecabricks.com/en/user/sexomercado\"><span style=\"\">https://mecabricks.com/en/user/sexomercado</span></a></p>\n<p><a href=\"https://backloggery.com/sexomercado\"><span style=\"\">https://backloggery.com/sexomercado</span></a></p>\n<p><a href=\"https://awan.pro/forum/user/157579/\"><span style=\"\">https://awan.pro/forum/user/157579/</span></a></p>\n<p><a href=\"https://sexomercado.blogpayz.com/profile\"><span style=\"\">https://sexomercado.blogpayz.com/profile</span></a></p>\n<p><a href=\"https://idol.st/user/154546/sexomercado/\"><span style=\"\">https://idol.st/user/154546/sexomercado/</span></a></p>\n<p><a href=\"https://selficlub.com/sexomercado\"><span style=\"\">https://selficlub.com/sexomercado</span></a></p>\n<p><a 
href=\"https://bandori.party/user/710561/sexomercado/\"><span style=\"\">https://bandori.party/user/710561/sexomercado/</span></a></p>\n<p><a href=\"https://coinfolk.net/user/sexomercado\"><span style=\"\">https://coinfolk.net/user/sexomercado</span></a></p>\n<p><a href=\"https://tabbles.net/users/sexomercado/\"><span style=\"\">https://tabbles.net/users/sexomercado/</span></a></p>\n<p><a href=\"https://anotepad.com/note/read/6hqsjr5e\"><span style=\"\">https://anotepad.com/note/read/6hqsjr5e</span></a></p>\n<p><a href=\"https://pastebin.com/u/sexomercado\"><span style=\"\">https://pastebin.com/u/sexomercado</span></a></p>\n<p><a href=\"https://paper.wf/sexomercado/sexomercado\"><span style=\"\">https://paper.wf/sexomercado/sexomercado</span></a></p>\n<p><a href=\"https://wall.page/P4s6dC\"><span style=\"\">https://wall.page/P4s6dC</span></a></p>\n<p><a href=\"https://www.komoot.com/user/5623654963515\"><span style=\"\">https://www.komoot.com/user/5623654963515</span></a></p>\n<p><a href=\"https://www.haikudeck.com/presentations/OStg1TmI31\"><span style=\"\">https://www.haikudeck.com/presentations/OStg1TmI31</span></a></p>\n<p><a href=\"https://sexomercado.elbloglibre.com/profile\"><span style=\"\">https://sexomercado.elbloglibre.com/profile</span></a></p>\n<p><a href=\"https://rant.li/sexomercado/sexomercado\"><span style=\"\">https://rant.li/sexomercado/sexomercado</span></a></p>\n<p><a href=\"https://quicknote.io/d4d457a0-2e60-11f1-be05-636f7ba8c4c7/\"><span style=\"\">https://quicknote.io/d4d457a0-2e60-11f1-be05-636f7ba8c4c7/</span></a></p>\n<p><a href=\"https://www.syncdocs.com/forums/profile/sexomercado\"><span style=\"\">https://www.syncdocs.com/forums/profile/sexomercado</span></a></p>\n<p><a href=\"https://fabble.cc/mercadosexo\"><span style=\"\">https://fabble.cc/mercadosexo</span></a></p>\n<p><a href=\"https://partecipa.poliste.com/profiles/sexomercado/activity\"><span 
style=\"\">https://partecipa.poliste.com/profiles/sexomercado/activity</span></a></p>\n<p><a href=\"https://electroswingthing.com/profile/\"><span style=\"\">https://electroswingthing.com/profile/</span></a></p>\n<p><a href=\"https://securityheaders.com/?q=https%3A%2F%2Fsexo-mercado.com%2F\"><span style=\"\">https://securityheaders.com/?q=https%3A%2F%2Fsexo-mercado.com%2F</span></a></p>\n<p><a href=\"https://listium.com/@sexomercado\"><span style=\"\">https://listium.com/@sexomercado</span></a></p>\n<p><a href=\"https://www.flyingv.cc/users/1447603\"><span style=\"\">https://www.flyingv.cc/users/1447603</span></a></p>\n<p><a href=\"https://scanverify.com/siteverify.php?site=https://sexo-mercado.com/\"><span style=\"\">https://scanverify.com/siteverify.php?site=https://sexo-mercado.com/</span></a></p>\n<p><a href=\"https://openwhyd.org/u/69ce145438a416710f960ecd\"><span style=\"\">https://openwhyd.org/u/69ce145438a416710f960ecd</span></a></p>\n<p><a href=\"https://www.harimajuku.com/profile/gaudeopticos40533/profile\"><span style=\"\">https://www.harimajuku.com/profile/gaudeopticos40533/profile</span></a></p>\n<p><a href=\"https://akniga.org/profile/1407517-sexomercado/\"><span style=\"\">https://akniga.org/profile/1407517-sexomercado/</span></a></p>\n<p><a href=\"https://www.shippingexplorer.net/en/user/sexomercado/271338\"><span style=\"\">https://www.shippingexplorer.net/en/user/sexomercado/271338</span></a></p>\n<p><a href=\"https://photozou.jp/user/top/3447283\"><span style=\"\">https://photozou.jp/user/top/3447283</span></a></p>\n<p><a href=\"https://fileforums.com/member.php?u=297584\"><span style=\"\">https://fileforums.com/member.php?u=297584</span></a></p>\n<p><a href=\"https://app.readthedocs.org/profiles/sexomercado/\"><span style=\"\">https://app.readthedocs.org/profiles/sexomercado/</span></a></p>\n<p><a href=\"https://taittsuu.com/users/sexomercado\"><span style=\"\">https://taittsuu.com/users/sexomercado</span></a></p>\n<p><a 
href=\"https://savee.com/sexomercado/\"><span style=\"\">https://savee.com/sexomercado/</span></a></p>\n<p><a href=\"https://code.antopie.org/sexomercado\"><span style=\"\">https://code.antopie.org/sexomercado</span></a></p>\n<p><a href=\"https://pxhere.com/en/photographer/4966852\"><span style=\"\">https://pxhere.com/en/photographer/4966852</span></a></p>\n<p><a href=\"https://gesoten.com/profile/detail/12691047\"><span style=\"\">https://gesoten.com/profile/detail/12691047</span></a></p>\n<p><a href=\"https://connect.gt/user/sexomercado\"><span style=\"\">https://connect.gt/user/sexomercado</span></a></p>\n<p><a href=\"https://participa.aytojaen.es/profiles/sexomercado/activity\"><span style=\"\">https://participa.aytojaen.es/profiles/sexomercado/activity</span></a></p>\n<p><a href=\"https://participation.bordeaux.fr/profiles/sexomercado/activity\"><span style=\"\">https://participation.bordeaux.fr/profiles/sexomercado/activity</span></a></p>\n<p><a href=\"https://participer.valdemarne.fr/profiles/sexomercado/activity\"><span style=\"\">https://participer.valdemarne.fr/profiles/sexomercado/activity</span></a></p>\n<p><a href=\"https://entre-vos-mains.alsace.eu/profiles/sexomercado/activity\"><span style=\"\">https://entre-vos-mains.alsace.eu/profiles/sexomercado/activity</span></a></p>\n<p><a href=\"https://jobs.siliconflorist.com/employers/4090608-sexomercado\"><span style=\"\">https://jobs.siliconflorist.com/employers/4090608-sexomercado</span></a></p>\n<p><a href=\"https://letterboxd.com/sexomercado/\"><span style=\"\">https://letterboxd.com/sexomercado/</span></a></p>\n<p><a href=\"https://routinehub.co/user/sexomercado\"><span style=\"\">https://routinehub.co/user/sexomercado</span></a></p>\n<p><a href=\"https://zimexapp.co.zw/sexomercado\"><span style=\"\">https://zimexapp.co.zw/sexomercado</span></a></p>\n<p><a href=\"https://cointr.ee/sexomercado\"><span style=\"\">https://cointr.ee/sexomercado</span></a></p>\n<p><a 
href=\"https://zrzutka.pl/profile/sexomercado-719934\"><span style=\"\">https://zrzutka.pl/profile/sexomercado-719934</span></a></p>\n<p><a href=\"https://civitai.com/user/sexomercado\"><span style=\"\">https://civitai.com/user/sexomercado</span></a></p>\n<p><a href=\"https://rotorbuilds.com/profile/209927/\"><span style=\"\">https://rotorbuilds.com/profile/209927/</span></a></p>\n<p><a href=\"https://pixelfed.uno/sexomercado\"><span style=\"\">https://pixelfed.uno/sexomercado</span></a></p>\n<p><a href=\"https://findpenguins.com/sexomercado\"><span style=\"\">https://findpenguins.com/sexomercado</span></a></p>\n<p><a href=\"https://3dlancer.net/profile/u1141719\"><span style=\"\">https://3dlancer.net/profile/u1141719</span></a></p>\n<p><a href=\"https://www.jointcorners.com/sexomercado\"><span style=\"\">https://www.jointcorners.com/sexomercado</span></a></p>\n<p><a href=\"https://naijamatta.com/sexomercado\"><span style=\"\">https://naijamatta.com/sexomercado</span></a></p>\n<p><a href=\"https://www.elephantjournal.com/profile/sexomercado/\"><span style=\"\">https://www.elephantjournal.com/profile/sexomercado/</span></a></p>\n<p><a href=\"https://www.beamng.com/members/sexomercado.783743/\"><span style=\"\">https://www.beamng.com/members/sexomercado.783743/</span></a></p>\n<p><a href=\"https://medibang.com/author/28085392/\"><span style=\"\">https://medibang.com/author/28085392/</span></a></p>\n<p><a href=\"https://audio.com/sexomercado\"><span style=\"\">https://audio.com/sexomercado</span></a></p>\n<p><a href=\"https://cinderella.pro/user/270104/sexomercado/\"><span style=\"\">https://cinderella.pro/user/270104/sexomercado/</span></a></p>\n<p><a href=\"https://forums.maxperformanceinc.com/forums/member.php?u=244028\"><span style=\"\">https://forums.maxperformanceinc.com/forums/member.php?u=244028</span></a></p>\n<p><a href=\"https://forum.aigato.vn/user/sexomercado\"><span style=\"\">https://forum.aigato.vn/user/sexomercado</span></a></p>\n<p><a 
href=\"http://www.genina.com/user/editDone/5256096.page\"><span style=\"\">http://www.genina.com/user/editDone/5256096.page</span></a></p>\n<p><a href=\"https://malt-orden.info/userinfo.php?uid=453994\"><span style=\"\">https://malt-orden.info/userinfo.php?uid=453994</span></a></p>\n<p><a href=\"https://www.iglinks.io/GaudeOpticos-z17?preview=true\"><span style=\"\">https://www.iglinks.io/GaudeOpticos-z17?preview=true</span></a></p>\n<p><span style=\"\">bio.site/sexomercado</span></p>\n<p><a href=\"https://heylink.me/gaudeopticos/\"><span style=\"\">https://heylink.me/gaudeopticos/</span></a></p>\n<p><a href=\"https://www.hostboard.com/forums/members/sexomercado.html\"><span style=\"\">https://www.hostboard.com/forums/members/sexomercado.html</span></a></p>\n<p><a href=\"https://infiniteabundance.mn.co/members/39106759\"><span style=\"\">https://infiniteabundance.mn.co/members/39106759</span></a></p>\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 166435,
            "forum_user": {
                "id": 166198,
                "user": 166435,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6a3fa8f3daaf7c427df256d72f0402ba?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-02T16:10:43.183354+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sexomercado",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sex-mercado",
        "pk": 4583,
        "published": false,
        "publish_date": "2026-04-02T16:15:56.969108+02:00"
    },
    {
        "title": "SAT Essay - 4 Tips to a Great SAT Essay Score",
        "description": "Many students experience some anxiety when the time arrives to take the SATs. After all, the SATs are an essential exam that many institutes of higher education consider during the admissions process. ",
        "content": "<p>Many students experience some anxiety when the time arrives to take the SATs. After all, the SATs are an essential exam that many institutes of higher education consider during the admissions process. In addition, essays can be especially difficult for students. However, proper preparation can lead students to success. If you &ldquo;<a href=\"https://tophomeworkhelper.com/write-my-essay.html\"><strong>&lt;u&gt;write my essay&lt;/u&gt;</strong></a><strong>&rdquo;</strong>&nbsp;correctly, you will grab graders' attention and earn an excellent score.</p>\n<p><strong>Two different graders grade</strong><strong>&nbsp;SAT essays</strong></p>\n<p>It will be of great SAT essay help if your know-how essays are graded. Each grader has six points to distribute, which allows for a combined high score of twelve. Essay graders will be grading your paper's content, organization, clarity, and be looking to see that you followed directions as indicated in the prompt. Graders will be judging each of these specific areas but will also be focusing on your essay as a whole.</p>\n<p><strong>Comprehend the essay prompt</strong></p>\n<p>The first step to &ldquo;<a href=\"https://tophomeworkhelper.com/write-my-essay-for-cheap.html\"><strong>&lt;u&gt;write my&lt;/u&gt;</strong>&lt;u&gt;&nbsp;&lt;/u&gt;<strong>&lt;u&gt;essay for cheap&lt;/u&gt;</strong></a><strong>&rdquo;</strong>&nbsp;is comprehending what the prompt asks. Make sure to fully understand the prompt before you begin writing and focus on the main idea. This will set the groundwork for a solid essay and allow you to start writing a high scoring essay. Students often misunderstand the prompt because they did not read it carefully enough to receive lower scores. Avoid this simple mistake by reading the prompt more than once.</p>\n<p><strong>Essays must include thesis statements</strong></p>\n<p>A solid thesis statement is the beginning of a successful essay. 
The thesis statement will answer the question that the prompt is asking and give the grader an idea of the direction of the essay. It can be helpful to include some of the wording of the prompt in the thesis statement. The thesis statement will be included in the introductory paragraph of your essay and a basic summary of the main ideas that will be discussed throughout your essay.</p>\n<p><strong>Maintain the essay length</strong></p>\n<p>The length of your essay should be at least four hundred words. Research completed by MIT indicated that<strong>&nbsp;</strong><a href=\"https://tophomeworkhelper.com/essay-help.html\"><strong>&lt;u&gt;essay &lt;/u&gt;</strong><strong>&lt;u&gt;help&lt;/u&gt;</strong></a><strong>&nbsp;</strong>who wrote at least four hundred words of essays received higher scores about ninety percent of the time. The body of your essay should be composed of at least three to four strong paragraphs that support your thesis statement.</p>\n<p>Each section should include an introductory and concluding sentence. Include academic examples from history or literature while avoiding personal stories unless specifically asked to prove your ideas. Essays with personal examples do not score as well as those with clear academic standards. The final paragraph of your essay needs to summarise and conclude the essay. Anecdotes or analogies, often found in conclusions of the highest-scoring SAT essays, are an excellent way to end an essay.</p>\n<p>Writing a well-written essay that receives a high score on the SAT is entirely possible. Correctly following the prompt, avoiding vague ideas and writing at least four hundred words will put you on the right path toward achieving a perfect score. 
Try not to become too consumed with one specific guideline, but focus on the essay as a whole, and you will see positive results!</p>\n<p>For More Related Services: <a href=\"https://tophomeworkhelper.com/do-my-math-homework.html\">&lt;u&gt;Do My Math Homework&lt;/u&gt;</a>, <a href=\"https://tophomeworkhelper.com/myassignmenthelp-reviews.html\">&lt;u&gt;Myassignmenthelp Reviews&lt;/u&gt;</a>, <a href=\"https://tophomeworkhelper.com/do-my-coursework.html\">&lt;u&gt;do my coursework&lt;/u&gt;</a>, <a href=\"https://tophomeworkhelper.com/coursework-help.html\">&lt;u&gt;Coursework Help&lt;/u&gt;</a>, <a href=\"https://tophomeworkhelper.com/plagiarism-free-essays.html\">&lt;u&gt;Plagiarism Free Essays&lt;/u&gt;</a>, <a href=\"https://tophomeworkhelper.com/make-my-assignment.html\">&lt;u&gt;Make My Assignment&lt;/u&gt;</a>, <a href=\"https://tophomeworkhelper.com/dissertation-help.html\">&lt;u&gt;Dissertation Help&lt;/u&gt;</a></p>",
        "topics": [],
        "user": {
            "pk": 28717,
            "forum_user": {
                "id": 28689,
                "user": 28717,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/38f57d63c31169e080b1bdd1fe9e4500?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "mikejohnson",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sat-essay-4-tips-to-a-great-sat-essay-score",
        "pk": 1147,
        "published": false,
        "publish_date": "2022-04-27T09:10:41.661687+02:00"
    },
    {
        "title": "Robosonic Play by Elias Naphausen",
        "description": "In this talk we will dive into non-anthropomorphic robotics and sonification. By turning data from an industrial manipulator into sound, we can listen to motion paths, velocities, forces and much more. We discuss existing approaches, previous research and play with a mapping setup.",
        "content": "<p><img alt=\"Robosonic Play\" src=\"https://forum.ircam.fr/media/uploads/user/7107fb8f05ca11aa09c2839ca8532725.jpg\" /></p>\r\n<p><sub>Image by Olga Toltinova</sub></p>\r\n<p><a href=\"http://robosonic.de/resources/robosonicplay_1.mp4\">Video</a></p>\r\n<p>What are the design potentials of sonification when it comes to human-robot-interaction? Can we change how we encounter robots by sounddesign? Can we listen to their data, their knowledge, and how would this change our interaction with them? In my practice-led research project (<a href=\"https://www.uni-weimar.de/de/universitaet/start/\">Bauhaus-University Weimar</a>/D-LAB &amp; <a href=\"https://www.tha.de/Gestaltung.html\">Technical University of Applied Sciences Augsburg</a>/<a href=\"https://hybridthings.tha.de/\">Hybrid Things Lab</a>) we focus on sonified sounds for non-anthropomorphic robots (e.g. industrial manipulators) and research human-robot-interaction in interactive demonstrators, explorations and playful experiences.</p>\r\n<p>In Robosonic Play we continue the promising results of our previous work (<a href=\"https://doi.org/10.1145/3611646\">https://doi.org/10.1145/3611646</a>) in nonlinear human-robot-material interaction scenarios, by incorporating the performative aspects of human-robot collaboration into our further research. We focus on moments of vagueness, autonomy, embodied interaction and the machine's data space as material for augmented sonic presence. By turning the inside of the machine (the information in form of data) out (in form of sound), we create an additional layer of information, which can help human collaborators to better understand their computational counterparts: Their autonomous actions (e.g. movement data) and their representation of the world (e.g. 
sensory data).</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/21e0029a1d4f19e2c153c9467c7b3a4e.jpg\" /></p>\r\n<p><sub>Image by Timo Holzmann</sub></p>\r\n<p>Together we will listen to a moving robot, create mappings between sound and machine, and tweak some knobs on a synth.&nbsp;</p>",
        "topics": [
            {
                "id": 2345,
                "name": "auditory display",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2344,
                "name": "human-robot-interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2343,
                "name": "robotics",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 703,
                "name": "Sonification",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 85885,
            "forum_user": {
                "id": 85783,
                "user": 85885,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/highres-lndw-7_square.jpg",
                "avatar_url": "/media/cache/31/0e/310eef95fb1adbc1284a1eb13a21b177.jpg",
                "biography": "Research associate, artist and PhD student located in Augsburg/Germany.\nResearching the sound of robotics by a data sonification approach.",
                "date_modified": "2024-12-02T14:47:06.090338+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 983,
                        "forum_user": 85783,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [
                            {
                                "id": 628,
                                "membership": 983
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "boinappi",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3009,
                    "user": 85885,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 127,
                    "user": 85885,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3066,
                    "user": 85885,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "robosonic-play",
        "pk": 3066,
        "published": true,
        "publish_date": "2024-10-24T11:06:35+02:00"
    },
    {
        "title": "HYPNOS",
        "description": "Hypnos and Thanatos, Sleep and Death. Death mirrors Sleep, because it is the latter that interacts with life; it is life itself, while Death represents its mirror opposite: life is mirrored in Death. Now Hypnos is introduced ... Thanatos can wait.\r\n\r\nStarting from the sounds recorded on the lakeside of the town where I was born, I imagined what the soundscape will be in the future.",
        "content": "<p><img alt=\"Domenico DE SIMONE - HYPNOS\" src=\"/media/uploads/user/a9dcfce62255a3947be63d7edc24fa71.png\" />Hypnos and Thanatos, Sleep and Death. Death mirrors Sleep, because it is the latter that interacts with life; it is life itself, while Death represents its mirror opposite: life is mirrored in Death. Now Hypnos is introduced ... Thanatos can wait.</p>\r\n<p>Starting from the sounds recorded on the lakeside of the town where I was born, I imagined what the soundscape will be in the future.</p>",
        "topics": [],
        "user": {
            "pk": 17764,
            "forum_user": {
                "id": 17759,
                "user": 17764,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Domenico_DE_SIMONE_-_Foto.jpg",
                "avatar_url": "/media/cache/aa/73/aa73d7c6e614f96d16d5dc4f3191464c.jpg",
                "biography": "Professor of Electroacoustic Composition at the \"Umberto Giordano\" Music Conservatory of Foggia. Graduated in Piano, Jazz, Composition and Electronic Music.\nHe also graduated in Composition advanced course at the Accademia Nazionale of Santa Cecilia under the guidance of Azio Corghi and in Electronic Music - 2nd academic level, with the highest marks and honors, at the Conservatory of Santa Cecilia under the guidance of Giorgio Nottoli. He enhanced his knowledge by attending the Accademia Chigiana in Siena, where he was awarded with the diploma of merit in Music for Film by Ennio Morricone and the diploma of merit in Composition by Franco Donatoni.\nIn 1995, 1996 and 1997 he was awarded by the S.I.A.E.\nHis compositions have been performed in more than one hundred concerts in Italy and abroad (China, Latvia, Canada, Chile, Argentina, Romania, Malta, USA, Ireland, UK, Spain, Austria, Brazil, etc.) and broadcasted by RADIOTRE.",
                "date_modified": "2024-11-17T11:52:53.272556+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "DDS",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "hypnos",
        "pk": 1278,
        "published": true,
        "publish_date": "2022-09-05T15:42:24+02:00"
    },
    {
        "title": "“Reciter(s)” by Po-Hao Chi (C-Lab, Taiwan)",
        "description": "Reciter(s) is a browser-based sound system that turns mobile devices into a dispersed polyphonic speech ensemble. The work explores collective listening and the aesthetics of desynchronization through web-native tools.",
        "content": "<p><strong></strong>Reciter(s) is a browser-based sound system that transforms mobile devices into a dispersed polyphonic speech ensemble. A central framework&mdash;developed with Max/MSP, a lightweight server, and a web front-end&mdash;distributes algorithmically recomposed text fragments to each connected browser in real time, triggering native speech synthesis across dozens of phones, tablets, or laptops.</p>\r\n<p>The piece embraces timing differences and hardware diversity, allowing latency, voice variation, and desynchronization to become performative. Rather than correcting these deviations, the system foregrounds them as sonic and spatial phenomena.</p>\r\n<p>Reciter(s) will be presented at the Diversonics Festival at C-LAB in Taipei (Oct &ndash; Nov 2025). Visitors can experience the work simply by opening a webpage on their own devices and joining the distributed recitation network&mdash;no app or installation required.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4429bd9421151a1f4cd2a1d677df5856.jpg\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/84b2a212e0828c136b8aeb72c217e0ab.jpg\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9f53216c0a03c504f342f680179dbaaa.jpg\" /></p>",
        "topics": [
            {
                "id": 3539,
                "name": "browser-based sound, participatory art, speech synthesis, Max/MSP, network performance, generative systems",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 2271,
            "forum_user": {
                "id": 2269,
                "user": 2271,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/efbc2b2e70d5149eae1b63d9ce64b95f?s=120&d=retro",
                "biography": "Po-Hao CHI is an interdisciplinary practitioner from Taiwan who works at the intersection of art, music, and technology. His practice often arises from a fascination with boundaries and guidelines, connecting diversity in daily life — from conceptual to virtual art, software to hardware, and performance to installation. His recent research focuses on agency and the collaborative capacities between humans and artefacts with evolving connectivity. Chi graduated from the MIT Art, Culture, and Technology programme, earned his MMus from Goldsmiths College, and obtained a B.A. in Economics from National Taiwan University.\n\nCHI's works frequently employ sonification approaches to design interactive systems, exploring \"more than human\" issues through technological artefacts. His international residencies include V2 (Netherlands), Laboral (Spain), FACT (U.K.), and Medialab Prado (Spain). He was also awarded the Harold and Arlene Schnitzer Prize in Visual Arts at MIT. Since 2016, he has also participated in theatre productions as a sound designer and composer, with commissions from Macau Art Center, National Kaohsiung Center for the Arts, Taipei Chinese Orchestra, Ju Percussion Group, and o",
                "date_modified": "2026-02-23T17:09:41.521400+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "stu84096",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "reciters-by-po-hao-chi-taiwan",
        "pk": 3850,
        "published": true,
        "publish_date": "2025-10-13T11:36:37+02:00"
    },
    {
        "title": "GRID - An Immersive Simulation of Anthropocene Forces",
        "description": "GRID presents an immersive environment where natural force fields are synthesized into both visual and acoustic forms.",
        "content": "<p class=\"gmail-ds-markdown-paragraph\"><i><span>GRID presents an immersive environment where natural force fields are synthesized into both visual and acoustic forms. Departing from photorealism, the aesthetic employs abstract, simulation-based compositions to evoke the power of natural systems.</span></i></p>\r\n<p class=\"gmail-ds-markdown-paragraph\"><i><span>The work develops a gestural language of high expressivity, consciously avoiding the nostalgia of neo-romanticism. Here, meaning is conveyed through the dynamics of movement and force itself. The human subject is repositioned not as a center of emotional experience, but as an entity acted upon by immense, impersonal powers. This reflects a critical perspective on the role of human and non-human actors within the Anthropocene.</span></i></p>\r\n<p class=\"gmail-ds-markdown-paragraph\"><i><span>The simulations do not illustrate water or air, but instead articulate the raw, underlying forces, creating an art of pure expression untethered from a fixed subject or conventional representation.</span></i></p>\r\n<p class=\"gmail-ds-markdown-paragraph\"><i><span>&nbsp;</span></i></p>\r\n<p class=\"gmail-ds-markdown-paragraph\"><i><span><img src=\"https://forum.ircam.fr/media/uploads/images/sample_2.png\" alt=\"\" width=\"823\" height=\"463\" /></span></i></p>\r\n<p class=\"gmail-ds-markdown-paragraph\"><img src=\"https://forum.ircam.fr/media/uploads/grid001_1.1.5.jpg\" alt=\"\" width=\"821\" height=\"462\" /></p>",
        "topics": [
            {
                "id": 1826,
                "name": " audiovisual",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2736,
                "name": "Forum 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 13668,
            "forum_user": {
                "id": 13665,
                "user": 13668,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/2eedbea8d01eaadef1edc003a36b49a4?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-12T18:33:57.842596+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "PHcomposer",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "grid-an-immersive-simulation-of-anthropocene-forces",
        "pk": 3964,
        "published": true,
        "publish_date": "2025-11-03T18:28:17+01:00"
    },
    {
        "title": "STRATEGIES ET OUTILS POUR LA SONIFICATION DE DONNÉES PROSODIQUES : POINT DE VUE D'UN COMPOSITEUR",
        "description": "Y a-t-il un sens à la sonification des informations qui se réfèrent à un phénomène déjà audible, comme les données prosodiques ?",
        "content": "<p>Est-il judicieux de sonifier des informations qui se rapportent &agrave; un ph&eacute;nom&egrave;ne d&eacute;j&agrave; audible, comme les donn&eacute;es prosodiques ? Pour &ecirc;tre utile, une sonification de la prosodie doit contribuer &agrave; la compr&eacute;hension de caract&eacute;ristiques paralinguistiques qui, autrement, ne retiendraient pas l'attention de l'auditeur. Dans ce contexte, l'article illustre un cadre modulaire et flexible pour la r&eacute;duction et le traitement des donn&eacute;es prosodiques &agrave; utiliser pour am&eacute;liorer la perception de l'intention, de l'attitude et des &eacute;motions du locuteur. Le mod&egrave;le utilise la parole comme entr&eacute;e et fournit des donn&eacute;es MIDI et MusicXML comme sortie, permettant aux &eacute;chantillonneurs et aux logiciels de notation d'audiod&eacute;crire et d'afficher les informations. L'architecture d&eacute;crite a &eacute;t&eacute; test&eacute;e subjectivement par l'auteur sur une p&eacute;riode de plusieurs ann&eacute;es en composant pour des instruments solistes, des ensembles et des orchestres. Deux r&eacute;sultats de la recherche sont discut&eacute;s : les avantages d'une strat&eacute;gie adaptative pour la r&eacute;duction des donn&eacute;es, et l'affichage auditif des structures temporelles et de hauteur profondes qui sous-tendent le traitement prosodique.</p>\r\n<p></p>\r\n<p><a href=\"https://icad2021.icad.org/wp-content/uploads/2021/06/ICAD_2021_41.pdf\">STRATEGIES ET OUTILS POUR LA SONIFICATION DE DONN&Eacute;ES PROSODIQUES : POINT DE VUE D'UN COMPOSITEUR</a></p>",
        "topics": [
            {
                "id": 571,
                "name": "Prosody",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 777,
            "forum_user": {
                "id": 777,
                "user": 777,
                "first_name": "Fabio",
                "last_name": "Cifariello Ciardi",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/b4d85d0aa03337677e97084a18abe800?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-01-12T12:46:05.083432+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "FabioCC",
            "first_name": "Fabio",
            "last_name": "Cifariello Ciardi",
            "bookmarks": []
        },
        "slug": "strategies-and-tools-for-the-sonification-of-prosodic-data-a-composers-perspective",
        "pk": 2694,
        "published": true,
        "publish_date": "2024-02-07T16:31:03+01:00"
    },
    {
        "title": "Throbbing Sonic - C-LAB",
        "description": "Presented during the IRCAM Forum Workshop 2023 In Paris.",
        "content": "<p class=\"s3\"><span class=\"s4\">&ldquo;</span><span class=\"s4\">The art works are like a medium to direct us into the spirituality, which is like a constant flow of consciousness, going through the inner and exterior space of the audiences&rsquo; bodies, washing out the purest part in the consciousness. The audiences get to experience a primal spiritual state of the self, </span><span class=\"s4\">so as to</span><span class=\"s4\"> activate, and to inspire other unique experiences.</span><span class=\"s4\">&rdquo;</span><span class=\"s4\"> </span><span class=\"s4\">─</span><span class=\"s4\"> </span><span class=\"s4\">Fujui</span><span class=\"s4\"> Wang</span></p>\r\n<p class=\"s3\"><span class=\"s4\"></span></p>\r\n<p class=\"s3\"><span class=\"s4\"><img src=\"/media/uploads/fujui_wang_photo.jpg\" alt=\"\" width=\"1024\" height=\"721\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<p class=\"s3\"><span class=\"s4\"></span></p>\r\n<p class=\"s3\"><strong><span class=\"s7\">I</span><span class=\"s7\">ntroduction</span></strong></p>\r\n<p class=\"s3\"><span class=\"s4\">T</span><span class=\"s4\">he creation of </span><span class=\"s8\">Throbbing Sonic </span><span class=\"s4\">comes from the hybrid modulation of sounds correlated with the process of image formation. Through the constant interaction between sound and vision, the sensory nervous system is stimulated. The in</span><span class=\"s4\">t</span><span class=\"s4\">er-uncertainty between human influence, media, and machine arithmetic </span><span class=\"s4\">play</span><span class=\"s4\">s</span><span class=\"s4\"> a part. C-</span><span class=\"s4\">LAB Taiwan Sound Lab utilizes the 3D sound mixing function in Spat stereoscopic system, together with a complete stereophony library, to control the source of sounds in physical space corresponding to the visual 3D environment. It creates improvised information offset in video glitch. 
The images perform in the states of instant movement, generation, formation, </span><span class=\"s4\">transformation</span><span class=\"s4\"> and decomposition, rendering unpredictability and uncertainty.</span></p>\r\n<p class=\"s6\"><span></span></p>\r\n<p class=\"s6\"><span><img src=\"/media/uploads/throbbing_sonic_02_photo_&copy;_et@t.jpg\" alt=\"\" width=\"1552\" height=\"873\" />&nbsp;</span></p>\r\n<p class=\"s6\"><span>&nbsp;</span></p>\r\n<p class=\"s3\"><strong><span class=\"s7\">Organizer</span></strong></p>\r\n<p class=\"s3\"><span class=\"s4\">ET@T, </span><span class=\"s4\">T</span><span class=\"s4\">aiwan Contemporary Culture Lab (C-LAB)</span><span class=\"s4\"> and </span><span class=\"s4\">Fujui</span><span class=\"s4\"> WANG</span></p>\r\n<p class=\"s3\"><strong><span class=\"s7\"></span></strong></p>\r\n<p class=\"s3\"><strong><span class=\"s7\">C</span><span class=\"s7\">reation Team</span></strong></p>\r\n<p class=\"s3\"><span class=\"s4\">Images and Sound Provided by </span><span class=\"s4\">Fujui</span><span class=\"s4\"> WANG</span></p>\r\n<p class=\"s3\"><span class=\"s4\">E</span><span class=\"s4\">xecutive Produc</span><span class=\"s4\">tion</span><span class=\"s4\">: </span><span class=\"s4\">Hsing-Jou</span><span class=\"s4\"> YEH</span><span class=\"s4\"> (ET@T)</span><span class=\"s4\"> &nbsp; </span></p>\r\n<p class=\"s3\"><span class=\"s4\">Technical Team: C-LAB Taiwan Sound Lab</span></p>\r\n<p class=\"s3\"><span class=\"s4\">A</span><span class=\"s4\">dministrative</span><span class=\"s4\"> Planning</span><span class=\"s4\">: Cecile HUANG</span></p>\r\n<p class=\"s3\"><span class=\"s4\">Technical</span><span class=\"s4\"> Management</span><span class=\"s4\">: </span><span class=\"s4\">Aluan</span><span class=\"s4\"> WANG</span></p>\r\n<p class=\"s3\"><span class=\"s4\">Visual Design: Yu-</span><span class=\"s4\">Jie</span><span class=\"s4\"> HUANG</span></p>\r\n<p class=\"s3\"><span class=\"s4\">S</span><span 
class=\"s4\">ound Designer: Chi-You DEAN, Yu-De LIN (Tainan National University of the Arts)</span></p>\r\n<p class=\"s3\"><span class=\"s4\">Produ</span><span class=\"s4\">ction Assistant</span><span class=\"s4\">: Hsiao-Ting CHU</span></p>",
        "topics": [],
        "user": {
            "pk": 31229,
            "forum_user": {
                "id": 31182,
                "user": 31229,
                "first_name": "Tom",
                "last_name": "Debrito",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d239346e0c19ec2b960555378b5fe912?s=120&d=retro",
                "biography": "Tom Debrito was the Events Coordination Manager of the IRCAM Forum for the year 2022-2023, as part of a work-study contract.\n\nHe was in charge of the coordination of the Forum Workshops 2022 with the New York University, the Forum Workshops 2023 in Paris and the Forum Workshops 2023 in Taipei in collaboration with the C-LAB. In addition, he handles communication and marketing related tasks to help the development of the IRCAM Forum.",
                "date_modified": "2023-10-30T12:25:43.859854+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 389,
                        "forum_user": 31182,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "debrito",
            "first_name": "Tom",
            "last_name": "Debrito",
            "bookmarks": []
        },
        "slug": "throbbing-sonic",
        "pk": 2067,
        "published": true,
        "publish_date": "2023-02-15T17:00:18+01:00"
    },
    {
        "title": "Comment autoriser un plugin payant ?",
        "description": "Suivez ce tutoriel pour autoriser un plugin payant.",
        "content": "<div class=\"title\">\r\n<div class=\"title\">\r\n<h1>Autoriser un plugin payant</h1>\r\n</div>\r\n<div class=\"content\">\r\n<p>1. T&eacute;l&eacute;chargez le fichier d'autorisation depuis <a href=\"https://forum.ircam.fr/shop/en/profile\">votre profil IRCAM Shop</a> apr&egrave;s avoir effectu&eacute; l'achat ou activ&eacute; l'abonnement.</p>\r\n<p>2. Dans le plugin, ouvrez le menu d&eacute;roulant principal et s&eacute;lectionnez Autoriser (ou cliquez sur le bouton d'avertissement D&eacute;mo).</p>\r\n<p>3. Cliquez sur Autoriser , puis s&eacute;lectionnez le fichier d'autorisation t&eacute;l&eacute;charg&eacute; (le glisser-d&eacute;poser est &eacute;galement pris en charge).</p>\r\n<p class=\"p1\"><img src=\"/media/uploads/asap-authorize.png\" alt=\"\" width=\"535\" height=\"305\" /></p>\r\n<p>Une fois le plugin valid&eacute;, les indicateurs de d&eacute;monstration disparaissent et l'entr&eacute;e de menu Autoriser devient inactive. La validation peut prendre quelques secondes.</p>\r\n<p>Si vous &ecirc;tes membre Premium, n'h&eacute;sitez pas &agrave; contacter <a href=\"https://forum.ircam.fr/contact\">le service client&egrave;le</a> pour toute question.</p>\r\n</div>\r\n<div>\r\n<h1></h1>\r\n</div>\r\n<div></div>\r\n</div>",
        "topics": [
            {
                "id": 925,
                "name": "ASAP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 291,
                "name": "Howto",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 35,
                "name": "Meta-forum",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "comment-autoriser-un-plugin-payant",
        "pk": 2054,
        "published": true,
        "publish_date": "2023-10-02T13:30:45+02:00"
    },
    {
        "title": "Here's the Information We Collect by Tansy Xiao",
        "description": "Here's the Information We Collect is an interactive video installation tailored to respond to selected privacy policy on major social media platforms. The audience members are invited to engage with the work by speaking into a microphone. Their words will be processed by a pre-coded speech recognition program to match the key words to specific sonic elements performed by professional vocalists, creating a dynamic and evolving musical score in real-time. The project employs the privacy policy of a particular cyber enterprise as an entry point to explore the implications of our online data and the tension between privacy, surveillance, and the free flow of information in the digital age. It also calls attention to tech corporations' collection and capitalization of user data behind the scenes as a form of digital colonialism.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div id=\"yui_3_17_2_1_1738864122159_137\">\r\n<div id=\"block-yui_3_17_2_1_1682696842303_20191\">\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c4968969fc61408144874700479fb90d.jpg\" width=\"1344\" height=\"756\" /></p>\r\n<p>Presented by : Tansy Xiao</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/nitrocaphane/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p><em>Here's the Information We Collect</em><span>&nbsp;</span>is a multi-channel interactive video installation tailored to respond to selected privacy policy on major social media platforms. The audience members are invited to engage with the work by speaking into a microphone. Their words will be processed by a pre-coded speech recognition program to match the key words to specific sonic elements performed by professional vocalists, creating a dynamic and evolving musical score in real-time. The project employs the privacy policy of a particular cyber enterprise as an entry point to explore the implications of our online data and the tension between privacy, surveillance, and the free flow of information in the digital age. 
It also calls attention to tech corporations' collection and capitalization of user data behind the scenes as a form of digital colonialism.</p>\r\n<p><img src=\"https://images.squarespace-cdn.com/content/v1/5a3773128dd041ba97751f26/732f6900-c56d-48a8-ae56-20bd3b1c40fb/notation.jpg\" width=\"1647\" height=\"512\" /></p>\r\n<p>This composition employed VOSK Offline Speech Recognition API to activate vocal performances. The musical notation was presented graphically, and the vocalists' contributions were recorded individually before being integrated in real-time. &nbsp;<a href=\"https://tansyxiao.com/s/Heres-the-Information-We-Collect_score.pdf\">Download full score</a></p>\r\n<div id=\"block-yui_3_17_2_1_1698477439277_4495\">\r\n<div>\r\n<div>\r\n<p>Ekmeles Vocal Ensemble - Soprano: Charlotte Mundy, Mezzo-Soprano: Elisa Sutherland Countertenor: Jonathan May, Tenor: Tom&aacute;s Cruz, Baritone: Jeffrey Gavett, Bass: Peter Stewart; Max/MSP Engineer: Matthew Ostrowski, Recording Engineer: Kevin Ramsay. 
Documentation captured during the residency at the Institute for Electronic Arts with the support of Media &amp; Technology Technician Bernard Dolecki.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div id=\"block-yui_3_17_2_1_1710104078093_6061\">\r\n<div>\r\n<div>\r\n<p>An online single-channel iteration of the project commissioned by<span>&nbsp;</span><a href=\"https://websoundart.org/\">WebSoundArt</a><span>&nbsp;</span>is now accessible online through<span>&nbsp;</span><a href=\"https://heres-the-information-we-collect.com/\" target=\"_blank\">this link</a>.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div id=\"yui_3_17_2_1_1738864122159_137\">\r\n<div id=\"block-yui_3_17_2_1_1682696842303_20191\">\r\n<p>**This project was produced in part at<span>&nbsp;</span><a href=\"https://www.harvestworks.org/\" target=\"_blank\">Harvestworks Digital Media Arts Center</a><span>&nbsp;</span>through the Artist-In-Residence Program, and is sponsored in part by the Greater New York Arts Development Fund of the New York City Department of Cultural Affairs, administered by<span>&nbsp;</span><a href=\"https://www.brooklynartscouncil.org/\" target=\"_blank\">Brooklyn Arts Council</a><span>&nbsp;</span>(BAC).</p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 2594,
                "name": "data privacy",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2596,
                "name": "graphic notation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2592,
                "name": "natural language processing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2593,
                "name": "speech-recognition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2595,
                "name": "vocal",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 33821,
            "forum_user": {
                "id": 33774,
                "user": 33821,
                "first_name": "Tansy",
                "last_name": "Xiao",
                "avatar": "https://forum.ircam.fr/media/avatars/typewriter.png",
                "avatar_url": "/media/cache/92/9f/929fc608df8b5270419fbadfad37c2c6.jpg",
                "biography": "Tansy Xiao is an artist, curator, and writer based in New York. Undertaking interdisciplinary collaborations involving human participants, technological systems, and non-anthropogenic organisms, Xiao creates theatrical installations with non-linear narratives. Her work explores the immense power and inherent inadequacy of language through the assemblage of stochastic audio and recontextualized objects. She finds solace in the unknown, ludicrousness in the authorities, and absurdity in the geopolitical demarcations that separate and differentiate people.\n\nXiao’s work has been shown at Queens Museum, New Media Caucus, Piksel Festival, Sound Scene at Hirshhorn Museum, Torrance Art Museum, NARS Foundation, HASTAC Conference, UKAI Projects, The American Society for Theatre Research, University of Porto, Osaka University of Art, Taipei Digital Arts Festival, WIP Arts and Technology Festival, New Adventures in Sound Art, Pelham Art Center, among others. She has received grants and support from NYSCA Electronic Media & Film | Wave Farm, Brooklyn Arts Council, Foundation for Contemporary Arts and Harvestworks Digital Media Arts Center.",
                "date_modified": "2025-03-28T15:11:17.904483+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nitrocaphane",
            "first_name": "Tansy",
            "last_name": "Xiao",
            "bookmarks": []
        },
        "slug": "heres-the-information-we-collect-by-tansy-xiao",
        "pk": 3255,
        "published": true,
        "publish_date": "2025-02-06T18:56:19+01:00"
    },
    {
        "title": "New Tuning Theory/Practice",
        "description": "retraction",
        "content": "<p>I completely retract last night's submission to you by the same name .... It was a mirage!! At least we have the 4/3 and 3/2 of current system... regards Flartec</p>",
        "topics": [],
        "user": {
            "pk": 17661,
            "forum_user": {
                "id": 17657,
                "user": 17661,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7356ec9886128a3b915cfe90fc832be6?s=120&d=retro",
                "biography": null,
                "date_modified": "2023-11-18T10:39:32.702791+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "flartec",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "new-tuning-theorypractice-1",
        "pk": 441,
        "published": false,
        "publish_date": "2020-01-18T21:12:57+01:00"
    },
    {
        "title": "The Creative Contract - Mamoru Watanabe, Domenica Landin",
        "description": "L'atelier sur la collaboration audiovisuelle à la salle Shannon le 21 mars 2024.",
        "content": "<p><span><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par :&nbsp;Mamoru Watanabe, Domenica Landin<br /><a href=\"https://forum.ircam.fr/profile/mwatanabe/\">Biographie Mamoru Watanabe</a><br /></span></p>\r\n<p><span></span></p>\r\n<p>L'essor des m&eacute;dias sociaux (par exemple, YouTube) a fait des vid&eacute;os musicales un outil pr&eacute;cieux pour la promotion de la musique et l'engagement du public. Si cette &eacute;volution a ouvert des possibilit&eacute;s de collaboration entre musiciens et artistes visuels, la demande d'&oelig;uvres visuelles ne cesse de cro&icirc;tre. Pour les artistes &eacute;mergents disposant d'un budget limit&eacute; ou nul, le moyen le plus courant de r&eacute;pondre aux normes de l'industrie est l'&eacute;change de capital cr&eacute;atif (comp&eacute;tences + aptitudes + temps = production cr&eacute;ative). Sur cette base, les cr&eacute;ateurs s'engagent dans une collaboration fond&eacute;e sur la camaraderie. Cependant, les dynamiques de pouvoir, les diff&eacute;rences culturelles et les pratiques extractives entravent souvent ces collaborations. Pour y rem&eacute;dier, nous proposons un <em>contrat cr&eacute;atif</em> renouvel&eacute; - un accord convivial fond&eacute; sur le respect mutuel et la r&eacute;ciprocit&eacute;. Nous discutons des &eacute;changes cr&eacute;atifs entre musiciens et artistes visuels en d&eacute;but de carri&egrave;re, des questions &eacute;thiques qui se posent au cours de ces &eacute;changes et nous proposons des suggestions pour &eacute;tablir des accords de collaboration plus harmonieux et plus durables.</p>\r\n<p>Dans le cadre d'un atelier de design participatif, nous invitons les participants &agrave; r&eacute;diger leurs contrats cr&eacute;atifs tout en cocr&eacute;ant une couverture sp&eacute;culative. 
Le processus suit une s&eacute;rie d'invites et utilise des outils sp&eacute;cialement con&ccedil;us pour soutenir et guider les participants lorsqu'ils prennent des d&eacute;cisions ensemble. L'atelier guide les participants &agrave; travers les accords de collaboration et les aide &agrave; identifier les pratiques de collaboration non extractives. L'atelier est con&ccedil;u pour durer 75 minutes. Les participants ne sont pas tenus d'avoir une exp&eacute;rience des pratiques audiovisuelles ou multim&eacute;dias, mais il est pr&eacute;f&eacute;rable qu'ils s'int&eacute;ressent aux pratiques cr&eacute;atives collaboratives.</p>\r\n<p></p>\r\n<p><span><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></span></p>",
        "topics": [
            {
                "id": 128,
                "name": "Audiovisual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1096,
                "name": "workshop",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27887,
            "forum_user": {
                "id": 27859,
                "user": 27887,
                "first_name": "Mamoru",
                "last_name": "Watanabe",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG-20231222-WA0001.jpg",
                "avatar_url": "/media/cache/6d/7a/6d7acf046a76bc210a4cdf0f7a91ab8c.jpg",
                "biography": "Mamoru Watanabe (b.1992, Tokyo, Japan) is an artist and PhD candidate at the University of Bristol, UK. He is currently conducting research on ‘Synaesthesia’ in the context of Human-Computer Interaction (HCI) under the supervision of Prof. Atau Tanaka and Dr. Oussama Metatla.\nHis research interests revolve around audiovisual/visual music culture, and interaction with digital media for bodily perception, transcendence, and imagination. Alongside his research, he engages with multimedia and audiovisual practices both individually and collaboratively with other practitioners.",
                "date_modified": "2024-04-14T10:07:55.781651+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "mwatanabe",
            "first_name": "Mamoru",
            "last_name": "Watanabe",
            "bookmarks": []
        },
        "slug": "the-creative-contract",
        "pk": 2820,
        "published": true,
        "publish_date": "2024-03-11T10:30:15+01:00"
    },
    {
        "title": "Family Life - recomposed - Yann COPPIER",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p style=\"text-align: justify;\">Family life, and children in general are commonly associated with chaos and unsolicited noise. Something sound artists themselves mostly try to avoid, as the worst place for them to work. This 16-loudspeaker installation, in opposition, aims at celebrating the musicality in everyday life, where intimacy and brutality, tensions and resolutions, power and abandonment are all intrinsically linked together.</p>\r\n<p style=\"text-align: justify;\">24 hours in the life of a family of four were thus recorded through a network of omnidirectional microphones set in an apartment located on a pedestrian street of Copenhagen, early 2022. The technique used doesn&rsquo;t aim at recording people, instruments nor objects in a defined space though, but the space itself including its full acoustics in which, incidentally, people, instruments and/or objects might be interacting. Making it counter-intuitively a non-anthropocentric work, in which the family is merely inhabiting a living space.</p>\r\n<p style=\"text-align: justify;\">The final piece re-composes the apartment on 16 loudspeakers in a big empty room. It is a full 3D celebration of the musicality in everyday life, where intimacy and brutality, tensions and resolutions, power and abandonment are all intrinsically linked together. Performed in a recreated apartment (using tape on the floor to show where the walls are), it allows the audience to experience a most intimate world from the apartment&rsquo;s point of view, everywhere and nowhere at the same time, as it seemingly encapsulates everything, even sounds from the outside, in a 100m2 box filled with emotions&hellip;</p>\r\n<p style=\"text-align: justify;\">In January 2022 a first highly immersive version which blended discussions, drama and music, both instrumental and spectral, was created at Inter Arts Center, Malm&ouml;, Sweden. 
As the most complex dramaturgies unfolded on an everyday basis, in a place few people outside the inner family circle are allowed to peek into, it pushed participants to walk among &ldquo;ghosts&rdquo;: other people&rsquo;s memories seemed to walk literally through them &ndash; or was the audience itself reduced to ghosts?</p>\r\n<p style=\"text-align: justify;\">The project is now developing through a second residency in March 2023, with the clear goal of expanding its emotional content, developing its concept, and presenting it to a wider audience. Moreover, placing the family (or the apartment in which the family lives) at the center of the art piece implies a potentially radical change in the way we as a society consider our ties to what we could call our inner and outer lives, maybe by joining them &ndash; at least for some time.</p>\r\n<p style=\"text-align: justify;\">It should be noted that the installation was created using a specific recording technique developed during three years of artistic research (<a href=\"https://www.researchcatalogue.net/view/820939/821146/0/0\">'Absurd Sounds', JAR edition 23</a>), which will be presented with examples. It was finally rendered to dynamic binaural sound using sensors combined with SPAT Revolution, allowing its audience to become ghosts in a living &ndash; yet invisible &ndash; family, or to witness memories flying around them. Here SPAT was used to prototype, test, document, and finally transpose the piece to new spaces.</p>",
        "topics": [
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1162,
                "name": "Invisible Choreography",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1167,
                "name": "non-anthropocentric art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1166,
                "name": "re-composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 262,
            "forum_user": {
                "id": 262,
                "user": 262,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/10f5f452ee60b2a242dff4ac9ac4b4a1?s=120&d=retro",
                "biography": "Yann Coppier (FR) is a sound artist, producer, performer and composer based in Copenhagen, Denmark. \n\nBesides his personal projects within the fields of music and sound art, theatre, dance or film, he has been head of the Sound Line at The Danish National School of Performing Arts from 2014 to 2020. There he developed an ambitious artistic research project with support from the Danish Ministry of Culture. Making extensive use of the absurd, in his research sonic dramaturgy – and not technology - occupies the primary focus, in an attempt to restore and develop the lost meaning of sounds and to open an alternative field of artistic investigation.\n\nHe released a massive publication in Journal For Artistic Research (JAR) about \"Absurd Sounds\" in 2021, focusing on a creative method based on ideas rather than technology. There he develops 3 case studies, each one answering its own absurd question: What if silence were louder than noise? What if there were no sweet spot? What if there were no (virtual) reality?\n\nHis recent projects often feature complex multichannel compositions in the real and virtual world, in which he plays with our perception of what sound is and what it could become.",
                "date_modified": "2025-01-27T01:01:32.394306+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 39,
                        "forum_user": 262,
                        "date_start": "2012-11-22",
                        "date_end": "2024-03-20",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "YannCOPPIER",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "family-life-recomposed",
        "pk": 2070,
        "published": true,
        "publish_date": "2023-02-16T19:39:37+01:00"
    },
    {
        "title": "Public Intimacy by Sylvain Souklaye",
        "description": "Public Intimacy is a motion- and site-specific sonic experience that reimagines the relationship between sound, space, and human presence through binaural techniques. By capturing and manipulating live environmental sounds, bodily movements, and architectural acoustics, the project examines how sonic hierarchies dictate spatial and social interactions based on the politics of noise, making questions of who is heard or silenced tangible. Through improvisation and real-time sound processing, Public Intimacy transforms public spaces into sites of collective intimacy, where deep listening becomes an act of resistance and reconnection.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img src=\"https://forum.ircam.fr/media/uploads/solitary_venice.jpg\" alt=\"\" width=\"723\" height=\"904\" />&nbsp; &nbsp;<img src=\"https://forum.ircam.fr/media/uploads/soliloquy_in_motion_(live_at_flux_factory's_residency_on_governor's_island_october_2023).jpg\" alt=\"\" width=\"903\" height=\"903\" />&nbsp;&nbsp;</p>\r\n<p>By : Sylvain Souklaye</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/sylvainbk/\" target=\"_blank\">Biography</a></p>\r\n<p>Public Intimacy is a motion- and site-specific sonic experience that reimagines the relationship between sound, space, and human presence through binaural techniques. Public Intimacy explores how collective intimacies emerge within public environments, transforming them into sensitive narrative spaces.</p>\r\n<p>At its core, Public Intimacy is both a system and a philosophy&mdash;an evolving performance framework that investigates the power dynamics within the &ldquo;politics of noise.&rdquo; The piece examines how sonic hierarchies dictate spatial and social interactions by capturing and manipulating live environmental sounds, bodily movements, and architectural acoustics. Who is heard? Who is silenced? How do different bodies claim or resist space through sound? Public Intimacy makes these questions tangible through improvisation and real-time sound processing, constructing an immersive and hyper-localized auditory world. 
Binaural recording techniques heighten this experience, amplifying the nuances of human interaction and the spatial resonance of the site.</p>\r\n<p>Rather than adhering to a fixed composition, Public Intimacy adapts dynamically to its surroundings. Participants become active agents in shaping the soundscape, blurring the boundaries between performer and audience. Each whisper, breath, and movement contributes to a fluid interplay of sonic textures, fostering a deep sense of shared vulnerability and sensory dialogue. In this way, the project transforms public spaces into sites of collective intimacy, where architecture ceases to be a static backdrop. Instead, it becomes a living, breathing entity that reveals the often-unnoticed tensions of sonic power.</p>\r\n<p>This presentation will delve into the methodologies behind Public Intimacy, exploring the role of improvisation, binaural sound, and spatial engagement in crafting new modes of auditory and performative experience. By addressing the politics of noise, the project invites audiences to reconsider how power operates through sound, questioning whose voices shape our environments and how deep listening can become an act of resistance and reconnection.</p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 97928,
            "forum_user": {
                "id": 97806,
                "user": 97928,
                "first_name": "Sylvain",
                "last_name": "Souklaye",
                "avatar": "https://forum.ircam.fr/media/avatars/Flat-Noise-1400x1400.jpg",
                "avatar_url": "/media/cache/24/92/2492f72027428cd4ea0d31c13d70527b.jpg",
                "biography": "Sylvain Souklaye is a French Caribbean Brooklyn-based live artist, sonic maker, and author. His work explores the interiority of broken bodies, environmental urgencies, and political retribution beyond questions of identity. Rooted in DIY social justice, his early performances merged poetry, radical visual actions, and extreme happenings.\n\nHis durational live radio show at RCT in France laid the foundation for his immersive sonic experiences, which later evolved into writings (Le jour du fléau, Solus) and noise-driven installations. Using granular synthesis and binaural techniques, he crafts live experiences in which audiences become active participants in collective intimacy and epigenetic dialogue.\n\nHis notable works include Depopulated (Judson Church, Momentary), UNDERMY-YOUR-OURSKIN (ChaShaMa Gala, Grace Exhibition Space), Black Breathing (Kunsthalle am Hamburger Platz, CICA), and Liquid Soul (Helsinki Central Library Oodi). The Aesthetica Art Prize recognized him as one of the Top 100 Contemporary Artists. He co-hosts Conversations From the Center and is a commissioned artist for the International Contemporary Ensemble & Jerome Foundation (2022–2024) and a Harvestworks fellow.",
                "date_modified": "2025-08-05T07:37:02.858069+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sylvainbk",
            "first_name": "Sylvain",
            "last_name": "Souklaye",
            "bookmarks": []
        },
        "slug": "public-intimacy-by-sylvain-souklaye",
        "pk": 3300,
        "published": true,
        "publish_date": "2025-02-19T16:33:20+01:00"
    },
    {
        "title": "R-IoT v3 : un rapport d'étape - Emmanuel FLETY, Prototypes & Engineering Team (PIP)",
        "description": "Présentation de la version 3 du capteur IMU sans fil R-IoT lors de l'édition 2024 du workshop de l'IRCAM, par Emmanuel FLETY, Prototypes & Engineering Team (PIP)",
        "content": "<p><a href=\"https://forum.ircam.fr/agenda/save-the-date-ateliers-du-forum-2024-edition-des-30-ans/detail/\"><img src=\"https://forum.ircam.fr/media/uploads/thumbs/bandeaux_articles.png/bandeaux_articles-990x330.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></a></p>\r\n<p></p>\r\n<p>Pr&eacute;sent&eacute; par:&nbsp;Emmanuel FLETY, Prototypes &amp; Engineering Team (PIP)<br /><a href=\"https://forum.ircam.fr/profile/flety/\">Biography</a></p>\r\n<p>La plateforme de capteurs sans fil R-IoT est une petite carte &eacute;lectronique int&eacute;grant des capteurs de mouvement 3D &agrave; 9 axes et un microcontr&ocirc;leur sans fil con&ccedil;u pour cr&eacute;er des syst&egrave;mes de d&eacute;tection gestuelle bas&eacute;s sur l'Open Sound Control pour la recherche, l'analyse du mouvement et les arts de la sc&egrave;ne, les contenus interactifs et l'&eacute;lectronique en direct. Nous pr&eacute;sentons la derni&egrave;re version de la carte comme un rapport d'avancement du d&eacute;veloppement ainsi qu'une d&eacute;mo et les strat&eacute;gies &agrave; venir pour la fabrication et la diffusion publique.</p>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/be7e80cd2952c06e080f5d8296921798.jpg\" /></p>\r\n<p>&nbsp;<strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 1914,
                "name": "Gestural sensing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 244,
                "name": "Open sound control",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 100,
                "name": "Sensor",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1913,
                "name": "WIFI",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1912,
                "name": "Wireless IMU",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 9326,
            "forum_user": {
                "id": 9323,
                "user": 9326,
                "first_name": "Emmanuel",
                "last_name": "Flety",
                "avatar": "https://forum.ircam.fr/media/avatars/Flety_head3-removebg-preview1.png",
                "avatar_url": "/media/cache/61/45/614512f523bf49d2cd4c77f66e864c03.jpg",
                "biography": "Emmanuel FLETY is an electronics engineer at IRCAM and is in charge of the PIP Engineering and Prototype Team.  \nA specialist in embedded electronics, he has developed over the past twenty years expertise in digitization and acquisition \ninterfaces for miniaturized wireless sensors with low latency. \nThese are critical tools in the fields of motion capture and recognition, as well as in the creation of new gestural interfaces \nfor music and digital lutherie.  \n\nIn 2005, alongside his work at the Institute, he founded his own company, Plecter Labs, where he explores possible connections \nbetween the design of microcontroller boards and replicas of cinema props, thereby investigating tangible relationships between movement, \nsound, and light. A maker at heart, he enjoys exploring the poetic expression offered by unique interactive objects through a hands-on, \nartisanal approach.",
                "date_modified": "2025-02-27T11:24:46.657150+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 153,
                        "forum_user": 9323,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "flety",
            "first_name": "Emmanuel",
            "last_name": "Flety",
            "bookmarks": []
        },
        "slug": "r-iot-v3-a-progress-report",
        "pk": 2841,
        "published": true,
        "publish_date": "2024-03-19T10:14:09+01:00"
    },
    {
        "title": "Infinite Coastlines: diagrammatic scores, cybernetic patches, and agency in performance by Eliad Wagner",
        "description": "Infinite Coastlines is a composition–performance paradigm for working with agential systems. Initially developed by Eliad Wagner as a personal formalism for modular synthesizer practice, it functioned as a working method for producing identifiable and repeatable musical forms¹. The core interest is to locate clear human agency in the performance of electronic music by affording agency to the instrumental system as well. Agency here is not understood as total control, nor as the mere presence of generative processes, but as the capacity to steer a complex situation, to recognise emerging tendencies, and to decide when to stabilise, redirect, or let a process unfold. In this sense, Infinite Coastlines proposes an epistemological stance, a way of knowing, making, and performing music through structured interaction with technological behaviour. Eliad Wagner and Benjamin Bacon present, at IRCAM Forum, an expansion of the paradigm through the question of translation, and how its principles might apply to other electronic instrument systems.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<h3><img alt=\"performance of Infinite Coastlines 3 in Berlin, 2024 by Eliad Vagner. Photo: Kasia Mazur\" src=\"https://forum.ircam.fr/media/uploads/user/c8a5005e8b01a4f30673fc7e7c8d4417.jpg\" /></h3>\r\n<p>The modular synthesiser resembles an organisational system more than a clearly defined instrument. Its identity is highly malleable, describing a momentary rationale for signal flow rather than a stable set of timbral boundaries and playing techniques. While one can recognise trends and idioms across modular synthesiser culture, each system ultimately remains idiosyncratic, shaped by personal choices, available modules, and practical constraints. This makes compositional models or ideas difficult to transfer, since there is no shared instrumental ground on which any formalism might reliably operate.</p>\r\n<p>Recording technology has long offered us one possible pathway: rather than composing a reproducible work, one can record performances and later arrange the captured material. Yet this risks losing the formalism that generates the artefact, particularly when patches, signal relationships, modes of playing, and interface decisions are not captured in a durable way. Furthermore, it places electronic music in a fairly constrained box regarding a composition's flexibility - how far from a recording can alternative performances venture while retaining its core musical ideas? The question becomes how to articulate, preserve, and transmit the conditions under which a musical form emerges.</p>\r\n<p>On the other hand, performance adds another set of challenges. Synthesiser music performance is often timbre-based and parameter-dense. 
Performance requires simultaneous control of more dimensions than a single performer can reliably manage. The precise recreation of phrases and sounds is often impractical, and performances of the same idea tend to become a singular realisation. Automation can assist here, but runs the risk of undermining the performer&rsquo;s role unless responsibility and risk are explicitly composed. In such a context, playability is not guaranteed by the instrument&rsquo;s physical design. It must be composed.</p>\r\n<h3>Methodology</h3>\r\n<p>Infinite Coastlines meets these challenges by centering composition and performance around two intertwined components.</p>\r\n<p>First, a specific patch serves as the machine agent. It is not merely a sound source but a generator of musical behaviours, configured so that its internal dynamics present the performer with evolving circumstances that demand attention, judgement, and response.</p>\r\n<p>Second, a prescribed process of playing serves as the human agent&rsquo;s framework. It provides an orienting metaphor that frames exploration as a form of wayfinding, and treats improvisation as a disciplined mode of listening and decision-making within constraints. Together, patch and process establish a performance situation in which each realisation can differ, while remaining recognisable through shared behavioural tendencies, recurring landmarks, and a bounded range of outcomes.</p>\r\n<h3>The patch</h3>\r\n<p>The patch below is one instantiation of the approach, as used in <em>Infinite Coastlines 3</em>, the most recent and most developed iteration of the paradigm so far&sup2;. It is a self-regulating structure comprising four voice sub-patches (color-coded) and a modulation network, a collection of function generators, sample-and-hold processes, and shift registers that are not owned by any single voice. 
Audio and control signals are treated reciprocally across the system.</p>\r\n<p><img alt=\"Infinite Coastlines 3 patch (Eliad Wagner 2024)\" src=\"https://forum.ircam.fr/media/uploads/user/f604656d9ec5861fd48280db3708775c.png\" /></p>\r\n<p>The patch employs several strategies intended to produce behaviours that can stimulate the performer and invite response. Feedback patching plays a central role. By routing signals into non-linear processes and back into control structures, the system can exhibit behaviours associated with deterministic chaos: sensitive dependence on initial conditions, aperiodic but bounded motion, and quasi-repeating motifs. In Infinite Coastlines, these behaviours are not treated as scientific objects to be measured, but as musical phenomena to be recognised, navigated, and shaped through listening.</p>\r\n<p>A further aspect of the design is the extraction of higher-order information from signals and its reintroduction into the network. Envelope-following, slope detection, slew limiting, and sampling devices provide ways of attending to change, memory, and resolution within the system. The patch becomes less like a linear signal chain and more like an ecology of interacting processes, with emergent structures that feel related through self-similarity across time scales.</p>\r\n<h3>Metaphor</h3>\r\n<p>The playing process is articulated through spatial metaphors. The patch generates a &ldquo;Sound Terrain,&rdquo; which is understood as the bounded range of sound behaviours possible within the current system. The performer explores this terrain by tracing &ldquo;Musical Pathways,&rdquo; moving between states and exploring transitions.</p>\r\n<p>Because time is one of the dimensions being navigated, it is useful to distinguish between two temporalities. &ldquo;Pathway time&rdquo; describes formal structure, such as the duration of a traversal, the decision to dwell, and the pacing of transitions. 
&ldquo;Terrain time&rdquo; refers to temporal behaviours that are intrinsic to a state, such as modulation rhythms or clock and trigger rates. This distinction supports a practical compositional awareness: &ldquo;terrain time&rdquo; can itself be treated as a navigable dimension, slowed down, sped up, stabilised, or held constant while other dimensions of the &ldquo;Sound Terrain&rdquo; are explored.</p>\r\n<p>Within the &ldquo;Sound Terrain,&rdquo; &ldquo;Landmarks&rdquo; anchor perception and orientation. In <em>Infinite Coastlines 3</em>, these &ldquo;Landmarks&rdquo; correspond to principal voices or salient behavioural regions of the patch, recognisable enough to function as reference points during performance. These landmarks anchor the performer&rsquo;s wayfinding. Rather than executing fixed phrases, the performer&rsquo;s work centers on the deliberate shaping of routes, perspectives, and durations through a bounded environment whose details remain responsive in the moment.</p>\r\n<h3>Diagram as score, score as map</h3>\r\n<p>In hardware modular practice, patches are not stored as files, so a diagram becomes a practical tool for reconstruction and transmission. In Infinite Coastlines, diagrammatic notation develops further into a compositional device. The patch is captured in a diagram that the musician can reconstruct, but the same diagram also operates as an operational map of the Sound Terrain, showing not the terrain&rsquo;s surface but the mechanisms that generate it. It specifies assembly (an operational prescription) and describes a navigable sound-world, including landmarks and relations. In this sense, the score is not only representational, but also infrastructural. 
It builds the conditions under which forms can emerge, and it provides reference points for locating the performer within those conditions.</p>\r\n<h3>Grammatisation: practice as the production of a gesture vocabulary</h3>\r\n<p>In Infinite Coastlines, performance depends on the process of developing a working memory for a grammar of gestures, or grammatisation&sup3;. Emphasising practice in this way signals that the composition is to be recreated in performance, where recognisable form is carried through rehearsal rather than preserved through exact reproduction. This grammar is collected through practice and rehearsal, as the performer explores the Sound Terrain by playing the synthesiser and develops physical and gestural memory for particular behaviours and responses.</p>\r\n<p>The performer learns to differentiate direct controls (predictable, local effects such as direct changes to level or filtering) from entangled controls (distributed influences that cascade through multiple processes). Over time, practice produces iterable gestural sets that can be recalled, varied, and recombined.</p>\r\n<p>In Infinite Coastlines, improvisation can become compositional precisely because it yields a grammar, stabilising form without fully determining it in advance.</p>\r\n<h3>Technology as collaborator, agency as a compositional problem</h3>\r\n<p>Infinite Coastlines treats agency not as a property located in either the performer or the instrument, but as something produced through their coupling. The aim is not to automate intention, but to construct situations in which technological behaviour can be negotiated. This reframes technology as a collaborator in a practical sense.</p>\r\n<p>Approached this way, the paradigm is not tied to modular synthesis. 
It can be translated to other electronic instruments and systems, provided they support a comparable interplay between system behaviour, an orienting metaphor for navigation, and the rehearsal of an embodied gesture grammar.</p>\r\n<h3>Footnotes</h3>\r\n<p>&sup1; This focus is not a normative criterion for musical value, but a deliberate constraint that opens questions about notation, memory, and performance in electronic practice.</p>\r\n<p>&sup2; The paradigm can be realised with different patches, as long as they support self-regulation, emergent behaviour, and meaningful negotiation by a performer. In the workshop, co-led with Benjamin Bacon, we also demonstrate a translation of the paradigm to an instrument built around the RAVE neural network.</p>\r\n<p>&sup3; Stiegler defines grammatisation as &ldquo;the process through which flows and continuities which weave our existences are discretized&rdquo; (Stiegler, <em>For a New Critique of Political Economy</em>, 2010, p. 31).</p>",
        "topics": [
            {
                "id": 4213,
                "name": "agency",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4215,
                "name": "cybernetics",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4214,
                "name": "modular synthesizer",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 130,
                "name": "Performance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 316,
                "name": "Score",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 110693,
            "forum_user": {
                "id": 110553,
                "user": 110693,
                "first_name": "Catalyst Institute for Creative Arts and Technology",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Eliad_Wagner_4_partial_by_Karolina_Gembara.jpg",
                "avatar_url": "/media/cache/1c/32/1c3240d3ed803aa1f72c753dce55295c.jpg",
                "biography": "Eliad Wagner is a composer, performer, and educator. He holds a BSc in Physics from the Hebrew University of Jerusalem and an MMus in Composition and Music Technology from the Utrecht School of the Arts. His work explores performance with agential processes, trans-stylistic vocabularies, and cosmotechnics.\n\nWagner performs primarily with the modular synthesizer and composes for solo performance, ensembles, and installation contexts. He is a co-founder and regular contributing composer of the electroacoustic ensemble Circuit Training. Since 2015, he has led the B.A. in Electronic Music Production and Performance at Catalyst Institute for Creative Arts and Technology in Berlin.\n\nHis music appears on labels such as Superpang, Room40, and Fält, with work presented by MIT Press, RSB Berlin, HKW, STEIM, and the Guggenheim, among others.",
                "date_modified": "2026-02-27T11:53:31.801251+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1130,
                        "forum_user": 110553,
                        "date_start": "2025-05-06",
                        "date_end": "2026-05-06",
                        "type": 1,
                        "keys": [
                            {
                                "id": 815,
                                "membership": 1130
                            },
                            {
                                "id": 823,
                                "membership": 1130
                            }
                        ],
                        "type_string": null,
                        "num_keys": 20,
                        "is_valid": true
                    }
                ]
            },
            "username": "catalystlicenses",
            "first_name": "Catalyst Institute for Creative Arts and Technology",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "infinite-coastlines-diagrammatic-scores-cybernetic-patches-and-the-problem-of-agency",
        "pk": 4339,
        "published": true,
        "publish_date": "2026-02-10T13:59:13+01:00"
    },
    {
        "title": "Paysage sonore 3D illusoire et immersif - \"Glacier\" - Zoe Lin",
        "description": "La perte globale des glaciers a un impact sur diverses régions, les effondrements en mer générant des vagues d'avertissement, tandis qu'une composition utilise des paysages sonores immersifs en 3D pour symboliser l'impact humain et brouiller les frontières artistiques.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p>Present&eacute; par :&nbsp;Zoe Lin&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/ZoeLin/\">Biographie<br /><br /></a></p>\r\n<p>Depuis le d&eacute;but du XXe si&egrave;cle, les glaciers du monde entier n'ont cess&eacute; de diminuer, affectant diverses r&eacute;gions, des Alpes et de l'Himalaya au Groenland. L'Antarctique, le plus grand r&eacute;servoir de glaciers de la plan&egrave;te, est confront&eacute; &agrave; l'amincissement de la glace et &agrave; l'effondrement des plateaux.</p>\r\n<p><br />Chaque ann&eacute;e, les glaciers d&eacute;versent un volume colossal de 46 kilom&egrave;tres cubes de glace, accompagn&eacute; de grondements assourdissants. Les effondrements de glaciers li&eacute;s &agrave; la mer impliquent que des plateaux de glace massifs plongent dans l'oc&eacute;an avec une force sismique, g&eacute;n&eacute;rant des vagues semblables &agrave; celles d'un tsunami. La prudence est de mise pour les observateurs &agrave; bord de navires &eacute;loign&eacute;s.</p>\r\n<p><br />Cette composition utilise des paysages sonores immersifs en 3D, cr&eacute;ant des exp&eacute;riences synesth&eacute;siques. Elle d&eacute;peint de mani&egrave;re saisissante la texture des glaciers, leur hauteur, l'&eacute;coulement des glaces flottantes et leur d&eacute;sint&eacute;gration. Au-del&agrave; de l'imagerie hypnotique, elle symbolise la violence humaine et la destruction de notre plan&egrave;te. Ce r&eacute;cit s'&eacute;tend au domaine de la guerre du XXIe si&egrave;cle, exprimant un sentiment apocalyptique. 
Bien qu'elle soit principalement &eacute;lectronique, elle int&egrave;gre des &eacute;l&eacute;ments de type chorale qui symbolisent le lien entre l'homme et la divinit&eacute;, et aborde les th&egrave;mes de l'apocalypse, de la r&eacute;demption et, dans l'&eacute;l&eacute;gie finale, du pardon.<br />Tout au long de sa cr&eacute;ation, j'ai brouill&eacute; les lignes entre la peinture et la composition, sculptant des sons guid&eacute;s par ma vision artistique int&eacute;rieure. Fermez les yeux pour sentir la texture du glacier, sa temp&eacute;rature, son poids et les vagues qui enveloppent son essence consum&eacute;e.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 8487,
            "forum_user": {
                "id": 8484,
                "user": 8487,
                "first_name": "Zoe (Yi-Cheng)",
                "last_name": "Lin",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/3f7ac14247839fc28146480862faf659?s=120&d=retro",
                "biography": "Zoe Lin is a composer and software engineer, specializing in digital music. Her electronic compositions have achieved international acclaim, featured in 22 prestigious music festivals across 18 countries in Europe, Asia, North, and South America. Zoe holds a doctoral degree in composition from the University of Wisconsin-Madison. Previously, she worked as the Chief Music Officer at an AI music company, leading AI music generation research and development. Currently, Zoe is a full-time composer and part-time instructor at National Taiwan Normal University and Fu Jen Catholic University, teaching interdisciplinary courses that merge music and programming. She specializes in visual-auditory synesthetic electronic music, 3D immersive electronic music composition and mixing, and practical ambisonic system sound projection. Her work has been showcased globally, including events SiMN 2023 (Brazil), MUSLAB 2023 (Ecuador), MiRNArte 2023 (Venice, Italy), SICMF2023 (Seoul, South Korea), NIME 2023 (Mexico), NYCEMF 2023 (New York), Spatial Audio Conference 2023 (UK), NoiseFloor 2023 (UK), with upcoming features in SiMN 2023 and MUSLAB 2023's Phonographic Production - PLANETA COMPLEJO project.",
                "date_modified": "2026-02-23T08:10:15.722734+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ZoeLin",
            "first_name": "Zoe (Yi-Cheng)",
            "last_name": "Lin",
            "bookmarks": []
        },
        "slug": "paysage-sonore-3d-illusoire-et-immersif-glacier-zoe-lin",
        "pk": 2764,
        "published": true,
        "publish_date": "2024-02-21T17:46:11+01:00"
    },
    {
        "title": "Las Pintas",
        "description": "Las Pintas is an immersive audio/video project from José-Miguel Fernandez and Raphaël Foulon",
        "content": "<p>Las Pintas is an immersive audiovisual performance that embarks the audience on a journey through various generative universes. Each of these universes has a particular character: some are chaotic, others contemplative, some stable, others evolving. Each universe is composed of a particle mesh and is ruled by laws that determine the behavior of these particles. These laws can be stochastic, fixed, or follow natural phenomena (fluids mechanics, mass-spring systems, physic models). The performance itself includes live spatialized music and 360 degrees visuals.</p>\r\n<p>&nbsp;</p>\r\n<p>Here are some early artworks generated for Las Pintas :</p>\r\n<table style=\"border-collapse: collapse; width: 100%;\" border=\"1\">\r\n<tbody>\r\n<tr>\r\n<td style=\"width: 50%;\"><img src=\"/media/uploads/user/86a38372e6e3a3e7b4f5b1e1a3c78a14.jpg\" alt=\"\" width=\"400\" height=\"405\" /></td>\r\n<td style=\"width: 50%;\"><img style=\"font-size: 18px;\" src=\"/media/uploads/user/3a84555c4a323598f6db42c5929c223d.jpg\" alt=\"\" width=\"391\" height=\"394\" /></td>\r\n</tr>\r\n</tbody>\r\n</table>\r\n<p><img src=\"/media/uploads/user/fa86cfeb44700b2eafe70242f58cbd4a.jpg\" alt=\"\" width=\"1000\" height=\"584\" /></p>\r\n<p>&nbsp;</p>\r\n<p>This project is part of an ongoing residency at IRCAM and Soci&eacute;t&eacute; des Arts Technologiques (Montr&eacute;al). The first live performances are scheduled for October '19.<br /><br />In collaboration with IRCAM's RepMus team, we address several technical and scientific topics: the study of the relation between spatialized audio and immersive media (ambisonics VS fulldome and other formats), complex parameter mapping for real-time audiovisual generation systems, and the integration of versatile controllers (SenselMorph, 9 DOF sensors...). 
We are also developing new tools for audio and video performance in openFrameworks and AntesCollider (a SuperCollider server controlled by the Antescofo language).<br /><br />We will use this website to keep you posted about the developments of Las Pintas and to share some video demos.</p>",
        "topics": [
            {
                "id": 128,
                "name": "Audiovisual",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 131,
                "name": "Fulldome",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 154,
                "name": "José miguel fernandez",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 126,
                "name": "Las pintas",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 156,
                "name": "Live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 157,
                "name": "Real-time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17657,
            "forum_user": {
                "id": 17653,
                "user": 17657,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7d62d79e8c4f61e0d6666168fcbd35b7?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "rapha-l",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "las-pintas",
        "pk": 234,
        "published": true,
        "publish_date": "2019-08-08T14:19:16+02:00"
    },
    {
        "title": "La Volière",
        "description": "Projet de création autour des carnets de voyage de Messiaen dans lesquels il notait les chants d'oiseaux entendus lors de ses promenades. De la même manière qu'un dessin peut montrer plus de choses qu'une photo, la retranscription à la main plutôt que l'enregistrement permet de surligner certains aspects du chant, notamment son lien avec la rythmique grecque, et gommer d'autres aspects. On reconnaît le chant original, et pourtant c'est du Messiaen : c'est la nature qui imite l'art. Les élèves ont enregistré les mêmes oiseaux que ceux notés par le compositeur afin de mettre en évidence cette subjectivité, et ont produit le matériau de création collective.",
        "content": "<h1>Le projet Voli&egrave;re</h1>\r\n<p><img src=\"/media/uploads/user/30d548da26b9da8befd5f30577c24db3.jpg\" alt=\"\" width=\"945\" height=\"503\" /></p>\r\n<p>Le projet Voli&egrave;re est une exp&eacute;rimentation &agrave; grande &eacute;chelle sur le lien entre les enseignements artistiques et la cr&eacute;ation. Il est soutenu en majeure partie par&nbsp;la <a href=\"http://bnf.fr\">Biblioth&egrave;que Nationale de France</a>, ainsi que par le Minist&egrave;re de l&rsquo;Education Nationale,&nbsp;<a href=\"https://www.psl.eu/\">l&rsquo;universit&eacute; Paris Sciences et Lettres</a>, la Direction des Affaires Culturelles de la&nbsp;Ville de Paris, le <a href=\"http://crr.paris.fr/CRR_de_Paris.html\">Conservatoire &agrave; Rayonnement R&eacute;gional de Paris</a>, et <a href=\"http://www.ensembleinter.com\">l&rsquo;Ensemble Intercontemporain</a>.</p>\r\n<p>Il fait travailler un grand nombre d&rsquo;&eacute;l&egrave;ves&nbsp;en r&eacute;seau : le Conservatoire &agrave; Rayonnement R&eacute;gional de Paris, les conservatoires des 5&egrave;, 13&egrave; et 20&egrave; arrondissements, le coll&egrave;ge Garcia Lorca de la Courneuve, le coll&egrave;ge Janson de Sailly, l&rsquo;universit&eacute; PSL, des classes de CP des &eacute;coles parisiennes.</p>\r\n<p>Les partitions r&eacute;alis&eacute;es pour le projet sont libres, il est possible de les entendre et de les t&eacute;l&eacute;charger ici dans diff&eacute;rents formats, afin par exemple de les transposer pour votre instrument :</p>\r\n<p><a href=\"https://musescore.com/user/3508521/sheetmusic\">https://musescore.com/user/3508521/sheetmusic</a></p>\r\n<h2>L&rsquo;exp&eacute;rimentation : Math&eacute;matiques, Musique, Fran&ccedil;ais</h2>\r\n<p>La Biblioth&egrave;que Nationale de France a mis en ligne sur Gallica les&nbsp;<a href=\"https://gallica.bnf.fr/ark:/12148/btv1b550099023/f8.image\">carnets dans lesquels Olivier Messiaen notait les chants d&rsquo;oiseaux</a> qu&rsquo;il entendait lors de 
 ses promenades. Ci-dessous un exemple de support de cours &laquo;&nbsp;sonoris&eacute;&nbsp;&raquo; permettant d&rsquo;entendre le contenu d&rsquo;un carnet, d&eacute;velopp&eacute; pour les &eacute;l&egrave;ves par l&rsquo;Atelier des Feuillantines. Il s&rsquo;agit d&rsquo;une grive musicienne not&eacute;e le 4 mars 1956 &agrave; la Varenne-St-Hilaire entre 17h et&nbsp;18h.</p>\r\n<p><iframe width=\"640\" height=\"480\" src=\"https://player.vimeo.com/video/258123227\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"></iframe></p>\r\n<p>Des classes de Math&eacute;matiques fabriquent une voli&egrave;re &agrave; partir de mod&eacute;lisations de techniques de composition de Messiaen, comme les permutations sym&eacute;triques.</p>\r\n<p><strong>METHODOLOGIE DES PERMUTATIONS SYMETRIQUES :</strong></p>\r\n<p><a href=\"http://feuillantines.com/composer-sequence-de-21-secondes-audacity-employant-permutations-symetriques-de-messiaen/\">http://feuillantines.com/composer-sequence-de-21-secondes-audacity-employant-permutations-symetriques-de-messiaen/</a></p>\r\n<p>Les classes de lettres ont rep&eacute;r&eacute; des figures de rh&eacute;torique et produisent des textes en suivant une trame narrative, les classes de musique ont enregistr&eacute; puis transcrit ces chants d&rsquo;oiseaux en employant des outils de recherche de l&rsquo;Ircam, des &eacute;l&egrave;ves des conservatoires parisiens vont jouer ces transcriptions adapt&eacute;es &agrave; leur niveau gr&acirc;ce &agrave; des outils d&rsquo;intelligence artificielle, lors d&rsquo;un stage organis&eacute; avec les musiciens de l&rsquo;ensemble Intercontemporain, et d&rsquo;une restitution &agrave; la BnF.</p>\r\n<p><strong>LIVRET D'ACCOMPAGNEMENT DU PROJET POUR LES ELEVES :</strong></p>\r\n<p><a href=\"http://feuillantines.com/reservoir/Voliere/Messiaen-livret-accompagnement-du-stage.pdf\">http://feuillantines.com/reservoir/Voliere/Messiaen-livret-accompagnement-du-stage.pdf</a></p>\r\n<div 
id=\"attachment_2322\" class=\"wp-caption alignnone\" style=\"width: 1196px;\"><img src=\"http://feuillantines.com/wp-content/uploads/2019/03/Math-Garcia-Lorca-Projet-Voliere.jpg\" alt=\"\" width=\"1186\" height=\"692\" /><br />\r\n<p class=\"wp-caption-text\"><em>La classe UPE2A du Coll&egrave;ge Garcia Lorca, avec Zouhir El-Amri, professeur de Math&eacute;matiques, qui explique les permutations sym&eacute;triques afin de g&eacute;n&eacute;rer le rythme de la voli&egrave;re qui sera jou&eacute;e le 8 juin.</em></p>\r\n</div>\r\n<div id=\"attachment_2323\" class=\"wp-caption alignnone\" style=\"width: 1010px;\"><img src=\"http://feuillantines.com/wp-content/uploads/2019/03/Lettres-Garcia-Lorca.jpg\" alt=\"\" /><br />\r\n<p class=\"wp-caption-text\"><em>La classe UPE2A du Coll&egrave;ge Garcia Lorca, avec M&eacute;lanie Ory, professeur de Fran&ccedil;ais, qui explique les figures de rh&eacute;toriques que l&rsquo;on peut trouver dans les chants d&rsquo;oiseaux : par exemple des anaphores.</em></p>\r\n<p class=\"wp-caption-text\"><em><img src=\"http://feuillantines.com/wp-content/uploads/2019/03/Micro-parabole-projet-voliere.jpg\" alt=\"\" /></em></p>\r\n<p class=\"wp-caption-text\"><em>Sous le contr&ocirc;le d&rsquo;une ornithologue, les &eacute;tudiants de l&rsquo;universit&eacute; Paris Sciences et Lettres enregistrent les m&ecirc;mes esp&egrave;ces que celles not&eacute;es par Messiaen lors de ses promenades.</em></p>\r\n<p class=\"wp-caption-text\"><em><img src=\"http://feuillantines.com/wp-content/uploads/2019/03/Avec-Jean-Bresson-Ircam2.jpg\" alt=\"\" /></em></p>\r\n<p class=\"wp-caption-text\"><em>Les &eacute;tudiants de l&rsquo;universit&eacute; Paris Sciences et Lettres au Conservatoire de R&eacute;gion de Paris avec Jean Bresson, de l&rsquo;Ircam, d&eacute;veloppent un logiciel dans Open Music afin de g&eacute;n&eacute;rer la notation musicale des enregistrements de chants d&rsquo;oiseaux effectu&eacute;s la veille.</em></p>\r\n<p 
class=\"wp-caption-text\"><em><img src=\"http://feuillantines.com/wp-content/uploads/2019/03/Avec-Jean_Bresson-Ircam.jpg\" alt=\"\" /></em></p>\r\n<p class=\"wp-caption-text\"><em>Les &eacute;tudiants de l&rsquo;universit&eacute; Paris Sciences et Lettres testant les diff&eacute;rentes notations des chants d&rsquo;oiseaux g&eacute;n&eacute;r&eacute;es par leur programme capable de tenir compte de contraintes comme le niveau de l&rsquo;instrumentiste.</em></p>\r\n<p class=\"wp-caption-text\"><em><img src=\"http://feuillantines.com/wp-content/uploads/2019/04/2019-04-13-09.34.03.jpg\" alt=\"\" /></em></p>\r\n<p class=\"wp-caption-text\"><em>Les &eacute;l&egrave;ves avec leur professeurs, &agrave; la BnF pour &eacute;tudier les carnets de Messiaen </em></p>\r\n<p class=\"wp-caption-text\"><em><img src=\"http://feuillantines.com/wp-content/uploads/2019/04/2019-04-13-11.29.43.jpg\" alt=\"\" /></em></p>\r\n<p class=\"wp-caption-text\"><em>Les conservatrices du fonds Messiaen expliquent la structure des carnets</em></p>\r\n<p class=\"wp-caption-text\"><em><img src=\"http://feuillantines.com/wp-content/uploads/2019/04/3-CRR_VoliereFG_IMG_0427m.jpg\" alt=\"\" /></em></p>\r\n<p class=\"wp-caption-text\"><em>Les r&eacute;p&eacute;titions au CRR de Paris</em></p>\r\n<p class=\"wp-caption-text\">&nbsp;</p>\r\n</div>",
        "topics": [
            {
                "id": 264,
                "name": "Canons rythmiques",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 261,
                "name": "Messiaen",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 260,
                "name": "Pédagogie",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 263,
                "name": "Permutations symétriques",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 262,
                "name": "Sound studies",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1125,
            "forum_user": {
                "id": 1124,
                "user": 1125,
                "first_name": "Fabrice",
                "last_name": "Guédy",
                "avatar": "https://forum.ircam.fr/media/avatars/Fabrice_beaux_arts.jpg",
                "avatar_url": "/media/cache/07/50/07504576893a589dc3428d4de8ebc5ba.jpg",
                "biography": "Fabrice Guédy, composer, studied conducting, piano and composition in Paris. He teaches music analysis at Université de Paris Cité, piano and theory class at Atelier des Feuillantines, a conservatory and art school where students can learn simultaneously music and visual arts.\nAfter being assistant conductor of Daniel Barenboïm at « Orchestre de Paris », he entered the music research department of Ircam, worked with Gérard Assayag and André Riotte on composition formalization and new instrumental techniques.\nHe won the « Villa Medicis hors les murs » prize, and worked at UC-Santa Barbara. He was director of « Musique Lab 2 » project at Ircam, which consisted on developing a music pedagogy environment for music schools, allowing students to work directly with Ircam’s OpenMusic environment.\nAtelier des Feuillantines won the « Impact Societal » prize from Agence Nationale de la Recherche with Ircam’s ISMM team.\nHis compositions are played by ensembles like Ensemble Intercontemporain. Among his last works are « la Volière », with EIC and students from Conservatoire de Paris, and a piano concerto created by Madoka Fukami. He has created a live coding class at Atelier des Feuillantines.",
                "date_modified": "2024-05-23T20:24:25.788413+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Fabrice-Guedy",
            "first_name": "Fabrice",
            "last_name": "Guédy",
            "bookmarks": []
        },
        "slug": "la-voliere",
        "pk": 296,
        "published": true,
        "publish_date": "2019-11-24T15:42:56+01:00"
    },
    {
        "title": "Diffuse Architectures by Emma Margetson",
        "description": "Diffuse Architectures is a spatial sound composition for the IKO spherical loudspeaker array, developed during an artistic residency with the Acoustic and Cognitive Spaces team at IRCAM-STMS, Paris.",
        "content": "<p><strong><strong>➡️ This presentation is part of&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></p>\r\n<p><em>Diffuse Architectures</em>&nbsp;is a spatial sound composition for the IKO - a spherical loudspeaker array - and the acoustic environment it inhabits.</p>\r\n<p>Rather than treating the physical space as a neutral container for sound, the work repositions it as an active compositional force. The piece unfolds in four connected sections -&nbsp;<em>Invocation</em>,&nbsp;<em>Listening Walls</em>,&nbsp;<em>Resonant Collapse</em>, and&nbsp;<em>Signal Drift</em>&nbsp;- tracing a journey from deliberate, focused projection toward dispersed and unpredictable resonance.</p>\r\n<p>Synthetic material is woven together with field recordings captured across a range of physical sites using Higher Order Ambisonics (HOA EM64), creating a layered sonic architecture that continuously evolves in response to the space and the audience within it. These recordings are actively transformed through the use of Spat 5 tools, including augmentation of impulse responses, to transmute perceptual space into compositional material.</p>\r\n<p>Against the idea of music as a self-contained object deposited into a room, this work proposes something more entangled. 
The IKO is constitutively bound to its surroundings - the space is not a vessel for the composition but part of its substance, active and transformable.</p>\r\n<p><em>Diffuse Architectures</em>&nbsp;is developed as part of an artistic residency with the&nbsp;Acoustic and Cognitive Spaces&nbsp;team at IRCAM-STMS, Paris, as part of ongoing research into material environments and the IKO loudspeaker as a novel spatial musical instrument - one whose singular relationality between sound, space, and listener offers new possibilities for the art of composition.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/EM64 Field Recording\" /></p>",
        "topics": [
            {
                "id": 623,
                "name": "Ambisonics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2099,
                "name": "Composition acousmatique",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4426,
                "name": "em64",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4425,
                "name": "ikoloudspeaker",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3140,
                "name": "spherical arrays",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20174,
            "forum_user": {
                "id": 20166,
                "user": 20174,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Sam-Walton-emma-margetson15_copy_64KA0io.jpeg",
                "avatar_url": "/media/cache/ab/3f/ab3f68a803c68b3732a83e86f4c36ab4.jpg",
                "biography": "Emma Margetson (b.1993) is an acousmatic composer and sound artist based in the United Kingdom. Her output encompasses acousmatic composition, sound art, acousmatic performance interpretation and, occasionally, live electronic music improvisation. Her research interests include sound diffusion and spatialisation practices; site-specific works, sound walks and installations; audience development and engagement; and community music practice.\n\nHer music has been recognised and performed extensively in concerts and festivals internationally, and has been the recipient of a special mention in the Biennial Acousmatic Composition Competition Métamorphoses (Belgium, 2020), the Excellence in Sound Art & Sound Design Prize in the klingt gut! Young Artist Award (Germany, 2018), and a First prize ex æquo in the Space of Sound Spatialization Performance Competition (Belgium, 2019). \n\nEmma Margetson is a Senior Lecturer in Music and Sound and Programme Leader of the MA Music and Sound Design course at the University of Greenwich (England, UK). She is also Co-Director of the Loudspeaker Orchestra Concert Series.\n\nHer work is available on several recording labels, including empreintes DIGITALes.",
                "date_modified": "2026-03-06T08:41:06.476121+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "emmamargetson",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "diffuse-architectures-by-emma-margetson",
        "pk": 4474,
        "published": true,
        "publish_date": "2026-03-06T19:54:31+01:00"
    },
    {
        "title": "L'étrangeté perceptive en réalité virtuelle",
        "description": "Artistic research residency 2018.19.\r\nTrami NGuyen and Vincent Isnard.\r\nIn collaboration with the Espaces acoustiques et cognitifs team at Ircam-STMS.",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\"></h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Artistic research residency 2018.19</h3>\r\n<p><strong>&laquo; L'&eacute;tranget&eacute; perceptive en r&eacute;alit&eacute; virtuelle &raquo;</strong><br />In collaboration with the <a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac/\">Espaces acoustiques et cognitifs</a> team at Ircam-STMS</p>\r\n<p>This project aims to improve control of the information conveyed in multimodal artistic content in virtual reality, by means of perceptual tests and an interactive installation.</p>\r\n<p>Virtual reality offers considerable resources for artistic work. It makes it possible to go beyond traditional mediums by proposing environments in which artistic intentions rest on perceptual norms chosen by the artist. However, these new tools require thorough mastery in order to reproduce the original artistic intention, in particular to balance the rates of information transmitted in the visual and auditory modalities and thus favour their multisensory integration. Otherwise, an effect of strangeness can arise, depending on the realism produced and because of the expectations generated. 
We will study the impact of these information rates on the integration of the artistic object, through the feeling of strangeness it can generate. We will then produce artistic content representative of the results obtained on perceptual strangeness in virtual reality, in an interactive installation involving the spectator.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Trami NGuyen and Vincent Isnard</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202019/.thumbnails/trami_nguyen.jpg/trami_nguyen-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographies</h3>\r\n<p><strong>Trami NGuyen<br /></strong>Pianist, composer, performer. She obtained two master's degrees at the HEM in Geneva, in specialized performance with a soloist orientation and in pedagogy, as well as a diploma in electronic composition in the classes of Jean-Yves Bernhard and Jonathan Pontier. She also trained in video mapping with Aur&eacute;lien Lafargue at the Ga&icirc;t&eacute; lyrique. 
She has been a guest of festivals such as the Nuits sonores in Lyon, the Festival des Cr&eacute;ations Sonores in Perpignan, the Biennale de la photographie in Mulhouse, Kultur im Rex, the Schubertiades in Switzerland, Paris Quartier d'&eacute;t&eacute; (...), and of national venues such as the Philharmonie de Paris, the Op&eacute;ra de Massy, the Th&eacute;&acirc;tre du Safran in Amiens, Le Grand T in Nantes, La ferme du Buisson, L'Arsenal in Metz, the Th&eacute;&acirc;tre de Saint Quentin, La Filature in Mulhouse, Le Petit Globe and the Th&eacute;&acirc;tre de l'Echandole in Yverdon, Ono in Bern, the Kellertheater in Murten, and the Danziger50 Theater in Berlin.</p>\r\n<p>She has initiated or taken part in numerous multidisciplinary projects (creation of the IP improv-playgrounds, the shows <em>Dominos</em> and <em>Sing the body electric</em>, the films and video-mapped performance <em>Tsuki</em>, and the installations <em>Zwei Masken statt einer</em>). She co-founded the Ensemble Links and the Ensemble Artefact. Her repertoire is classical and contemporary, including new works by living composers. She composes electronic music and performs solo or in duo with Yan Gi Cheng, notably creating performative installations (<em>De Laplace &agrave; l&rsquo;endroit, Enfin dehors, the Flood Wall I, Tsuki, Mediations of black and white, Zwei Masken statt einer,</em> ...). Her discography comprises two recordings: <em>Contredanses</em> with the Ensemble Artefact and <em>3+3 autour de Faur&eacute;</em> with the J&eacute;r&ocirc;me Berney jazz trio. 
She recently recorded live concerts at Radio France for Bruno Letort's programme <em>Des aventures sonores</em> (<em>Body Utopia I</em>) and an electronic solo (<em>Body Utopia II</em>) at Ineedradio in Berlin. An actress and pianist on stage in 2015, in St&eacute;phane Ly-Cuong's musical Cabaret Jaune Citron, she has also collaborated with Etienne Pommeret on his play <em>Bienvenue au Conseil d'administration</em>, premiered at the Th&eacute;&acirc;tre de l'Echangeur. She has been in residence at the Cit&eacute; des Arts in Paris, at the Institut f&uuml;r alles m&ouml;gliche in Berlin, at the Villa Medicis for a short residency, and at the Zonadynamic Performance residency in Berlin.</p>\r\n<p><strong>Vincent Isnard<br /></strong>Trained as a sound engineer (Brest) and computer music designer (Saint-&Eacute;tienne), Vincent Isnard moved into research, obtaining the master's degree in Acoustics, Signal Processing and Computer Science Applied to Music (ATIAM). In 2016 he defended his doctorate, carried out at Ircam, on the auditory recognition of timbre. His scientific work has been presented in international journals and conferences. He also followed a philosophy curriculum focused on musical perception at the University of Brest, Sorbonne Universit&eacute;, and the &Eacute;cole Normale Sup&eacute;rieure. 
Finally, his contemporary musical practice developed in the classes of Laurent Durupt and Denis Dufour.</p>\r\n</div>\r\n</div>\r\n<h2 class=\"dotted\">Links</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"http://www.trami-nguyen.com/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>http://www.trami-nguyen.com/</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 32,
                "name": "Recherche Artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 30,
                "name": "Recherche Musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "letrangete-perceptive-en-realite-virtuelle",
        "pk": 25,
        "published": true,
        "publish_date": "2019-03-21T16:22:26+01:00"
    },
    {
        "title": "Study No°1 for AI Ensemble",
        "description": "This is the first of a series of exploratory compositions written for a group of AI performers. The ensemble consists of 2 violins, 2 flutes, viola and cello.",
        "content": "<p>I have been working on possible applications of deep learning techniques in the creation and/or performance of new music pieces since around 2017-2018. <a title=\"Recent Developments in Deep Learning and Their Possible Applications in Music\" href=\"https://medias.ircam.fr/x7c25d7\" target=\"_blank\" rel=\"noopener\">I made a presentation on the topic in March 2018, at Ircam</a>. Since then, the field has been moving really fast, and I found myself constantly trying to update my methods with cutting-edge results, which resulted in me postponing making actual music with what I have learned over the years. With this piece I have finally started applying the knowledge I gathered.</p>\r\n<p>The system is based on Google Magenta's DDSP, <a title=\"Magenta/DDSP\" href=\"https://github.com/magenta/ddsp\" target=\"_blank\" rel=\"noopener\">Differentiable Digital Signal Processing</a>. The core idea behind DDSP is to model the instrument sound as a synthesizer rather than as raw audio. Most Western instruments have harmonic overtones; even the ones that deviate from pure harmonic ratios do so by <a title=\"Audibility of Inharmonicity in String Instrument Sounds\" href=\"https://www.researchgate.net/publication/228587669_Audibility_of_Inharmonicity_in_String_Instrument_Sounds_and_Implications_to_Digital_Sound_Synthesis\" target=\"_blank\" rel=\"noopener\">a very small amount</a>. So it makes sense to model an instrument with a harmonic additive synthesizer, a band-limited noise generator for bow sounds, breath sounds, etc., and a reverb modeling the resonance of the instrument body. The actual neural network is then a recurrent one that learns to control all parameters of these three components, given the pitch and amplitude of a monophonic recording of the desired instrument.</p>\r\n<p>The Magenta team has released pre-trained networks for violin, flute, and a couple of other instruments. 
I have trained my own networks on viola and cello recordings, and I have composed a 4-minute piece for 2 flutes, 2 violins, viola, and cello, using graphic notation. The code used to generate the piece and the accompanying video-score are freely available on <a href=\"https://github.com/kureta/film/\" target=\"_blank\" rel=\"noopener\">my GitHub page</a>, although it is a bit of a mess right now. <span style=\"font-weight: 400;\">The piece has two sections: the first is based on increasingly deeper fractal elaborations of a simple initial motif, the second on stretched harmonic overtones.</span> I hope you enjoy it.</p>\r\n<p><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"//www.youtube.com/embed/LT73XTeWyMo\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>",
        "topics": [
            {
                "id": 672,
                "name": "Ddsp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 670,
                "name": "Deep learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 313,
                "name": "Machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 671,
                "name": "Rnn",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 61,
            "forum_user": {
                "id": 61,
                "user": 61,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/36752538526b328cb5c451a19a257b0d?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-15T21:13:03.953441+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "kureta",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "study-no-i-for-ai-ensemble",
        "pk": 1028,
        "published": true,
        "publish_date": "2022-01-06T11:34:09+01:00"
    },
    {
        "title": "Automated 3D Audio Control and Remixing System by Seungryeol Paik",
        "description": "This research presents a data-driven system for automated spatial audio remixing in 5th-order ambisonic formats, utilizing deep learning and machine learning for source separation, trajectory tracking, and reverberation estimation. The proposed system allows for the flexible manipulation of 3D soundfields, unlocking new possibilities for immersive media applications like VR and AR.",
        "content": "<h2></h2>\r\n<p><strong>1. Abstract</strong></p>\r\n<p>The growing demand for spatial audio in immersive media, such as virtual reality (VR) and augmented reality (AR), highlights the need for advanced tools that allow for flexible manipulation of complex sound fields. However, existing techniques for remixing and editing spatial audio&mdash;especially in high-resolution formats like 5th-order Higher-Order Ambisonics (HOA)&mdash;remain technically challenging.</p>\r\n<p>This ongoing research proposes the development of a comprehensive system leveraging deep learning (DL) and machine learning (ML) for source separation, trajectory tracking, and reverberation estimation within 5th-order ambisonic audio environments. The system aims to provide seamless manipulation of individual sound sources (from mono up to 5th-order ambisonics), spatial trajectories, and environmental reverberations, enabling the flexible exchange, removal, or addition of specific audio elements across different spatial mixes.</p>\r\n<p>By offering detailed control over sound sources and their movements within a 3D soundfield, this system opens up new possibilities for spatial audio remixing. The ultimate goal is to develop a system that is both automated and adaptable, capable of addressing the complex audio needs of VR, AR, and other immersive media applications.</p>\r\n<p>Current progress includes the creation of a multi-format dataset containing mono, ambisonic, and XYZ trajectory data, alongside the ongoing development of multichannel source separation models. These advancements pave the way for an efficient and intuitive system for spatial audio editing.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>2. Background and Motivation</strong></p>\r\n<p>Spatial audio, particularly ambisonic audio, plays a vital role in immersive media such as VR, AR, gaming, and film production. 
Higher-order ambisonics, like 5th-order ambisonics, provide high-resolution, 3D soundscapes that greatly enhance immersion. Although ambisonics can be adapted into different formats using spherical harmonics and open-source tools, their manipulation remains challenging, often requiring specialized equipment and advanced digital audio workstations (DAWs).</p>\r\n<p>While the video industry has made great strides in object replacement, background compositing, and seamless scene manipulation through ML/DL-powered tools, audio editing has not reached the same level of flexibility. Spatial audio remixing and sound source manipulation, such as adding, removing, or exchanging sources, trajectories, or reverberations, remain far more complex, especially when dealing with ambisonic audio in VR/AR applications. The need for specialized equipment and software complicates the process, limiting creative possibilities for audio engineers and content creators.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/242baf7d22b45ac50c6312bfdae5d189.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p><strong>3. Research Objectives</strong></p>\r\n<p>The primary goal of this research is to develop an automated system for 5th-order ambisonic audio remixing, using a data-driven approach to enable sound source separation, trajectory tracking, and reverberation adjustments. These controls will allow tasks such as removing, pasting, swapping, or modifying sound sources, changing their position and movement, and adjusting or removing reverberation. The specific objectives are:</p>\r\n<ul>\r\n<li>\r\n<p><strong>Source Separation</strong>: Develop a deep learning model that accurately identifies and separates multiple sound sources from a 5th-order ambisonic mix, outputting them as dry mono sources for easy manipulation. 
This provides precise control over individual elements within the mix.</p>\r\n</li>\r\n<li>\r\n<p><strong>Trajectory Tracking</strong>: Utilize machine learning techniques to extract and modify the 3D spatial trajectories (x, y, z coordinates) of the separated sound sources, enabling precise control over their movement within the soundfield.</p>\r\n</li>\r\n<li>\r\n<p><strong>Reverberation Modeling (RIR Extraction)</strong>: Analyze and model room impulse responses (RIR) to accurately capture environmental acoustics, allowing for realistic reverberation control during the remixing process.</p>\r\n</li>\r\n<li>\r\n<p><strong>Spatial Remixing</strong>: Develop a framework that supports remixing tasks such as swapping sound sources between ambisonic mixes, modifying their trajectories, or adjusting reverberations, all while maintaining the spatial integrity of the original soundfield.</p>\r\n</li>\r\n</ul>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c232b4ca1e81a09ac25c784acb5a248a.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p><strong>4. 
System Components<br /></strong></p>\r\n<p>The proposed system comprises the following components:</p>\r\n<ul>\r\n<li><strong>Source Separation Module</strong>: A deep learning model that separates sound sources from the input ambisonic mix, outputting them as individual mono channels.</li>\r\n<li><strong>Trajectory Tracking Module</strong>: Uses machine learning to track and modify the 3D spatial trajectories (x, y, z coordinates) of sound sources for remixing.</li>\r\n<li><strong>RIR Extraction Module</strong>: Estimates room impulse responses to model the acoustic characteristics of the environment, allowing realistic reverberation control.</li>\r\n<li><strong>Spatial Remixing Framework</strong>: Provides tools to swap, move, or modify sound sources and backgrounds based on extracted data, enabling flexible spatial remixing.</li>\r\n<li><strong>Ambisonic Re-encoding Component</strong>: Re-encodes the remixed audio into a 5th-order ambisonic format to ensure spatial accuracy and VR/AR compatibility.</li>\r\n</ul>\r\n<p>&nbsp;</p>\r\n<p><strong>5.&nbsp;Expected Benefits<br /></strong></p>\r\n<p>The outcomes of this research will offer key benefits such as:</p>\r\n<ul>\r\n<li>Seamless control of various audio formats, from mono to 5th-order ambisonics, within a single system&mdash;eliminating the need for ambisonic microphones and enabling comprehensive spatial audio control.</li>\r\n<li>Automated and efficient source separation for complex ambisonic audio mixes, simplifying the process of isolating sound sources.</li>\r\n<li>Accurate trajectory tracking in 3D space, allowing for precise and dynamic manipulation of sound sources within the soundfield.</li>\r\n<li>Enhanced spatial remixing tools that enable easy swapping, adjusting, and editing of sound sources and backgrounds.</li>\r\n<li>Improved accessibility for audio engineers, VR/AR developers, and content creators by reducing reliance on separate modules, specialized hardware, or complex 
software.</li>\r\n<li>Broader applications in immersive media, including VR, AR, gaming, and cinematic experiences, where precise spatial audio placement and manipulation are essential.</li>\r\n</ul>\r\n<p>&nbsp;</p>\r\n<p><strong>6.&nbsp;Current Progress<br /></strong></p>\r\n<p>The project is currently ongoing, with the following tasks completed and in progress:</p>\r\n<ul>\r\n<li>Dataset Development: A comprehensive dataset containing mono sources, 5th-order ambisonic recordings, and corresponding 3D spatial trajectory pair data has been completed. This dataset is derived from the author's exhibition and music projects, comprising 110 tracks featuring a variety of sounds, including soundscapes and vocals. The dataset, titled 'AMBISONIC-DML: A Higher-Order Ambisonics Audio Dataset for Spatial Audio Research and Creative Applications', has been submitted to ICASSP 2025 for review.</li>\r\n<li>Multichannel Source Separation: We are currently working on implementing a multichannel source separation model capable of handling complex ambisonic mixes. This involves training deep learning models to effectively isolate individual sound sources within the ambisonic soundfield, enhancing the separation capabilities for spatial audio remixing.</li>\r\n</ul>",
        "topics": [
            {
                "id": 2342,
                "name": "3d audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 623,
                "name": "Ambisonics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 670,
                "name": "Deep learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2341,
                "name": "immersive audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 313,
                "name": "Machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 83128,
            "forum_user": {
                "id": 83027,
                "user": 83128,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_5624.jpg",
                "avatar_url": "/media/cache/d6/6c/d66c33c5e8e2f8234c5e426074734235.jpg",
                "biography": "PhD researcher at Seoul National University under the supervision of Prof. Kyogu Lee, focusing on deep learning for music and 3D audio processing. Completed undergraduate studies in Political Science and French Literature at Yonsei University in Seoul and the University of Lausanne in Switzerland.\n \nDeveloped a multidisciplinary approach to audio, technology, and the humanities. Bassist of the indie band 'Band Nah', as well as a sound designer and audio engineer. Contributed to various exhibitions and performances, shaping unique soundscapes. Current research focuses on creating 3D audio works and developing end-to-end deep learning models for musicians and audio engineers in both stereo and spatial audio environments.\n\nPublished work related to music, audio, and technology in various conferences and competitions, including winning a gold medal at the IEM Student 3D Audio Production Competition (S3DAPC) and publishing papers at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) and Audio for Virtual and Augmented Reality (AVAR).",
                "date_modified": "2024-11-06T08:47:25.868491+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 935,
                        "forum_user": 83027,
                        "date_start": "2024-09-25",
                        "date_end": "2025-09-25",
                        "type": 0,
                        "keys": [
                            {
                                "id": 569,
                                "membership": 935
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    },
                    {
                        "id": 919,
                        "forum_user": 83027,
                        "date_start": "2024-09-09",
                        "date_end": "2025-09-09",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "paik402",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "automated-3d-audio-control-and-remixing-system-1",
        "pk": 3064,
        "published": true,
        "publish_date": "2024-10-23T12:30:10+02:00"
    },
    {
        "title": "Modeling variations in onsets and dynamics in music performances by Pablo Alvarado",
        "description": "Abstract of the poster: Modeling Variations in Onsets and Dynamics in Music Performances, by Pablo Alvarado. To be presented at the IRCAM Forum Workshops, Paris, 26-28 March 2025.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p></p>\r\n<p><img src=\"https://forum.ircam.fr/media/uploads/gp_regression_2.jpg\" alt=\"\" width=\"1493\" height=\"533\" /></p>\r\n<p></p>\r\n<p>Presented by: Pablo Alejandro Alvarado&nbsp;</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/pabloalvarado/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p>This poster explores the modeling of variations in onset location and dynamics in music performances.</p>\r\n<p>By analyzing audio recordings, this study aims to uncover the underlying patterns that characterize the rhythmic and temporal features of diverse music styles.<br />The deviation of each note onset is modeled as a stochastic function, specifically as a Gaussian process. Using Bayesian inference, it is possible to obtain a posterior distribution over this function, accurately capturing and representing the variations in performance.</p>\r\n<p>Additionally, this research aims to integrate generative models to create realistic backing tracks that maintain the stylistic rhythm features of human-like music interpretations, facilitating improvisation in an informed context.</p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 87694,
            "forum_user": {
                "id": 87590,
                "user": 87694,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/PAAD_2024.jpg",
                "avatar_url": "/media/cache/66/d5/66d57bdc51eeb79069968f690b0a3d81.jpg",
                "biography": "I am an Electronic Engineer from Universidad Tecnológica de Pereira, Colombia, with a\nPhD in Computer Science from the Centre for Digital Music (C4DM) at Queen Mary\nUniversity of London. I have expertise in sound processing, particularly in applying data\nscience methods such as Gaussian Processes and Bayesian Inference to multi-pitch\ndetection and automatic music transcription tasks.\n\nCurrently, I am pursuing a bachelor’s degree in Music (BMus) with emphasis on classical\nguitar, at Universidad de Antioquia, Colombia. In addition to my studies, I work as an\nadjunct teacher, focusing on frequency analysis of electric circuits at the same\nuniversity. At present, my research focuses on modeling variations in tempo, beat, and\nonsets in musical performances. The goal is to learn the characteristic patterns of\nrhythmic expressivity from audio recordings of music performances, and integrate\nthese patterns with generative models, in order to create more realistic backing tracks\nfor improvisation.",
                "date_modified": "2026-01-10T21:41:41.233021+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "pabloalvarado",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3279,
                    "user": 87694,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "modeling-variations-in-onsets-and-dynamics-in-music-performances-by-pablo-alvarado",
        "pk": 3279,
        "published": true,
        "publish_date": "2025-02-11T21:41:00+01:00"
    },
    {
        "title": "Navigating Precision and Liveliness in Artificial and Human Voice Drones with Ensemble Ikosikaihenagone by Benjamin Duboc and Diemo Schwarz",
        "description": "Benjamin Duboc (composition, double bass) and Diemo Schwarz (computer music design, electronics)",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p><img src=\"/media/uploads/oratorio-patch.jpg\" alt=\"\" width=\"3248\" height=\"2112\" /></p>\r\n<p>Presented by&nbsp;Benjamin Duboc and Diemo Schwarz</p>\r\n<p>The Ensemble Ikosikaihenagone of 21 musicians was founded by Benjamin Duboc in 2021. We will show how interactive synthesis techniques built with the MuBu framework for Max can be integrated into two comprovisation pieces to create a musical context that navigates between precision and liveness. In the piece Volumes II, additive synthesis with fine control of harmonics creates an immersive sonic bed for the acoustic instruments that can oscillate between stable and moving states. In the piece Oratorio, MuBu&rsquo;s granular and PSOLA resynthesis is used to freeze the spoken voice of the musicians in order to create infinite chords that retain the identity of each individual voice while allowing fine control of mix, micro-tuning, and spatialisation.</p>",
        "topics": [],
        "user": {
            "pk": 36,
            "forum_user": {
                "id": 36,
                "user": 36,
                "first_name": "Diemo",
                "last_name": "Schwarz",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/9bf9105c2fbdb55023f9437ac99a6630?s=120&d=retro",
                "biography": "Diemo Schwarz is a researcher at IRCAM, and a musician and creative programmer. He performs on his own digital musical instrument based on his CataRT open source software, exploring different collections of sound with the help of gestural controllers that reconquer musical expressiveness and physicality for the digital instrument, bringing back the immediacy of embodied musical interaction to the rich sound worlds of digital sound processing and synthesis.\nHe interprets and performs improvised electronic music as a member of the ONCEIM improvisers orchestra and the ensemble Ikosikaihenagone, and with various other musicians, and he composes for dance and performance, video, and installation.\nHis scientific research on sound analysis/synthesis and gestural control of interaction with music is the basis of his artistic work, and allows him to bring advanced and fun musical interaction to expert musicians and the general public.\nIn 2017 he was DAAD Edgar-Varèse guest professor for computer music at TU Berlin, and in 2022 artist in residence in the Arts, Sciences, Societies fellowship program of IMéRA institute of advanced studies, Aix–Marseille Université.",
                "date_modified": "2026-02-24T12:21:32.536216+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 397,
                        "forum_user": 36,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-19",
                        "type": 0,
                        "keys": [
                            {
                                "id": 7,
                                "membership": 397
                            },
                            {
                                "id": 9,
                                "membership": 397
                            },
                            {
                                "id": 13,
                                "membership": 397
                            },
                            {
                                "id": 21,
                                "membership": 397
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "schwarz",
            "first_name": "Diemo",
            "last_name": "Schwarz",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 329,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 257,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 496,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 38,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 1045,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 21,
                    "emitter_object_id": 299,
                    "user": 36,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "navigating-precision-and-liveliness-in-artificial-and-human-voice-drones-with-ensemble-ikosikaihenagone-by-benjamin-duboc-and-diemo-schwarz",
        "pk": 3367,
        "published": true,
        "publish_date": "2025-03-20T11:24:12+01:00"
    },
    {
        "title": "NewImages Hub - Michele Ziegler",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p><span>Michele Ziegler, Chief Digital Officer and Director of the NewImages Festival, will present the Forum des images' new support system for creators: NewImages Hub. The program includes a festival, international and French residencies, workshops, and personalized support.</span></p>",
        "topics": [],
        "user": {
            "pk": 31229,
            "forum_user": {
                "id": 31182,
                "user": 31229,
                "first_name": "Tom",
                "last_name": "Debrito",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d239346e0c19ec2b960555378b5fe912?s=120&d=retro",
                "biography": "Tom Debrito was the Events Coordination Manager of the IRCAM Forum for the year 2022-2023, as part of a work-study contract.\n\nHe was in charge of the coordination of the Forum Workshops 2022 with New York University, the Forum Workshops 2023 in Paris, and the Forum Workshops 2023 in Taipei in collaboration with the C-LAB. In addition, he handled communication and marketing-related tasks to support the development of the IRCAM Forum.",
                "date_modified": "2023-10-30T12:25:43.859854+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 389,
                        "forum_user": 31182,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "debrito",
            "first_name": "Tom",
            "last_name": "Debrito",
            "bookmarks": []
        },
        "slug": "newimages-festival-michele-ziegler",
        "pk": 2137,
        "published": true,
        "publish_date": "2023-03-14T14:50:45+01:00"
    },
    {
        "title": "REACH:Forum in March : updates on software REACH:suite and perspectives",
        "description": "The REACH research group in the IRCAM Music Representations team has created the software suite REACH:suite, already comprising OMax, Somax, Djazz, and Dicy2.\r\n\r\nAt the Forum Workshops in March, REACH will present the whole suite as an ecosystem, show novelties, and give a sense of the ongoing research that will shape the next innovations in the suite.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<h3><span><b><span>REACHing Forum Session</span></b></span></h3>\r\n<p><span><b>Updates in the Suite:</b></span></p>\r\n<ul>\r\n<li><span><b>OMax 5</b>, released in the Forum for the first time: the premium modular version of OMax, with the ability to run N models of the corpus in parallel, OMax weaving its path through the best solutions</span></li>\r\n<li><span><b>Djazz 2.0</b>, released in the Forum for the first time: the impro software that lets you express compositional and beat/pulse-oriented scenarios</span></li>\r\n<li><span><b>Somax 2.7</b>: new release with new audio descriptors for timbre (MFCC), a brand-new label system that lets you tag the corpus the way you like (e.g. new instrumental techniques, musical gestures, linguistic features, etc.), multi-label corpus building, and several functional and UI improvements.</span></li>\r\n<li><span><b>Dicy2 3.x</b>&nbsp;: new release, Max 9 compatibility, new MuBu version, bug fixes in external synchronisation, handling of MIDI files by drag&amp;drop</span></li>\r\n</ul>\r\n<p><span><b>New software in the suite:</b></span></p>\r\n<p><span><b>Somax2Collider 1.0&nbsp;</b>: new version of Somax with SuperCollider as the front-end control language (instead of Max), with applications e.g. 
in scenarios involving a great number of agents and inclusion of spatial knowledge and collective spatial dynamics for AI agents.</span></p>\r\n<p><span><b>Research &amp; Innovation :</b></span></p>\r\n<ul>\r\n<li><span><b>SoVo</b>: convergence between Somax2 world and George Lewis&rsquo; Voyager world, agent cooperation between corpus based and machine-learning based generativity, and interaction strategies / shape forming dynamics (G. Lewis, G. Assayag, M. Fiorini, D. Holzborn)</span></li>\r\n<li><span><b>ipt~</b>&nbsp;: machine listening of Instrumental Playing Techniques using deep learning, interfacing with the REACH:Suite (N. Brochec, J.Borg, M. Fiorini)</span></li>\r\n<li><span><b>SpeechMax&nbsp;</b>: tools for speech segmentation and prosody, integration to the REACH:Suite (M. Malt, G. Bloch)</span></li>\r\n<li><span><b>SpaceSynth</b>&nbsp;: extension of corpus based generative interaction in REACH:Suite to 3D agents space and sound synthesis &nbsp;(J.M. Fernandez, A. Gatti, M. Fiorini)</span></li>\r\n<li><span>&hellip;</span>​​​</li>\r\n</ul>\r\n<p><b>REACH:suite</b><span>&nbsp;people: G&eacute;rard Assayag (REACH:suite &amp; co), Joakim Borg (Somax), Georges Bloch (Omax, Somax), Daniel Brown (Djazz), Marc Chemillier (Djazz), Jose-Miguel Fernandez (Somax2Collider), Marco Fiorini (Somax), Mikhail Malt (Omax, Somax), J&eacute;r&ocirc;me Nika (Dicy2). Legacy : &nbsp;Laurent Bonnasse-Gahot (somax),&nbsp;</span><span>Axel Chemla-Romeu-Santos (somax),&nbsp;</span><span>Benjamin L&eacute;vy, (omax).</span><span>&nbsp;</span><span>We use&nbsp;MuBu, MaxSoundBox by the ISMM Team, Zsa by Mikhail Malt and Emmanuel Jourdan, SylSeg by Nicolas Obin and ISMM team, SpaceSynth contributions by A. 
Gatti (PDS team).</span></p>\r\n<p><span style=\"text-decoration: underline;\"><strong>Date :</strong></span></p>\r\n<p><span>March 27th, 16:00 - 18:00 (Studio 5)</span></p>\r\n<table width=\"969\" height=\"227\">\r\n<tbody>\r\n<tr>\r\n<td><a href=\"http://recherche.ircam.fr/equipes/repmus/OMax/\" target=\"_blank\"><img src=\"/media/uploads/omax_2.jpg\" alt=\"\" width=\"240\" height=\"148\" /></a></td>\r\n<td><a href=\"https://forum.ircam.fr/projects/detail/somax-2/\" target=\"_blank\"><img src=\"/media/uploads/somax.jpeg\" alt=\"\" width=\"240\" /></a></td>\r\n<td><a href=\"https://forum.ircam.fr/projects/detail/dicy2/\" target=\"_blank\"><img src=\"/media/uploads/dyci2.png\" alt=\"\" width=\"240\" /></a></td>\r\n<td><a href=\"https://digitaljazz.fr/research/\" target=\"_blank\"><img src=\"/media/uploads/djazz.png\" alt=\"\" width=\"240\" /></a></td>\r\n</tr>\r\n</tbody>\r\n</table>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17748,
            "forum_user": {
                "id": 17743,
                "user": 17748,
                "first_name": "Gerard",
                "last_name": "Assayag",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e7f22ca09fef8b854d33ed5de26b107e?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-03T15:40:57.523680+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1236,
                        "forum_user": 17743,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "assayag",
            "first_name": "Gerard",
            "last_name": "Assayag",
            "bookmarks": []
        },
        "slug": "reachforum-in-march-updates-on-software-reachsuite-and-perspectives",
        "pk": 3267,
        "published": true,
        "publish_date": "2025-02-10T17:42:59+01:00"
    },
    {
        "title": "ACIDS 360 by Nils Demerlé and David Genova",
        "description": "A tour of the latest tools developed within the ACIDS research group, ranging from cutting edge generative systems to interfaces and embedded device platforms.",
        "content": "<div><strong> </strong></div>\r\n<div><strong><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></div>\r\n<p></p>\r\n<div></div>\r\n<div><strong>ACIDS 360 : Embedded neural audio synthesis tools, controls, and bending.</strong></div>\r\n<div>\r\n<p></p>\r\n<p>The ACIDS research group is dedicated to crafting neural audio synthesis tools for musical production and performance. Over the past years, the group has developed a range of technologies for timbre transfer, controllable instrument generation, network compression, analysis, and model bending.</p>\r\n<p>This presentation introduces the underlying principles and motivations behind these developments, bridging technical, musical, and scientific perspectives. It explores the new tools and creative possibilities opened up by neural audio synthesis, while presenting recent updates to existing systems and highlighting the latest research and artistic collaborations that have led to new approaches.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/feb3d315397c7821eae9ec501d843894.png\" /></p>\r\n<p>Latest collaborations : Canblaster &amp; Neurotipyque @ Marathon festival, Pierce Warnecke @ Ircam Forum Session, Mol&eacute;cule @ Ircam.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Technologies</strong></p>\r\n</div>\r\n<div>\r\n<div>\r\n<p><strong>Ravetable</strong> ~ What if latent signals were treated as audio signals? 
Ravetable explores this idea, building a latent-based wavetable synthesizer on top of the RAVE model that enables time-synchronised, zero-latency neural audio synthesis.</p>\r\n<p><img alt=\"ravetable_small.png\" src=\"https://github.com/acids-ircam/ravetable/blob/main/figures/ravetable_small.png?raw=true\" /></p>\r\n<p>&nbsp;</p>\r\n<p><strong>AFTER</strong> ~ AFTER combines the versatility and expressivity of latent-based neural synthesis with accurate control over known musical features such as melodic content. This presentation will cover the latest developments to the AFTER system and Max4Live device, with new synthesis paradigms such as descriptor-based synthesis and BPM-clock-conditioned drum-break generation.&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/62cf3342816d2b47cb10c65ef686ecab.png\" /></p>\r\n<p><strong>TorchBend</strong> ~ Inspired by circuit bending, TorchBend makes it possible to manipulate the internal representations learned by neural networks in order to deviate, push, or break the synthesis process and create entirely new soundscapes.</p>\r\n<p><strong>JUNK&nbsp;</strong>~ JUNK is a real-time hardware instrument built around a Raspberry Pi with dedicated MIDI controls that can host nn~ models such as RAVE.&nbsp;</p>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 326,
                "name": "Control",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1774,
                "name": "neural synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 329,
                "name": "Signal processing",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 70829,
            "forum_user": {
                "id": 70756,
                "user": 70829,
                "first_name": "Nils",
                "last_name": "Demerle",
                "avatar": "https://forum.ircam.fr/media/avatars/Nils2.jpeg",
                "avatar_url": "/media/cache/a0/b1/a0b10261c08fafb142fc5599a575e0e2.jpg",
                "biography": "Former PhD student in IRCAM's Analysis/Synthesis team, member of the ACIDS research group.\nMy work focuses on controllable neural audio synthesis with diffusion models and representation learning of musical audio signals.",
                "date_modified": "2026-02-17T15:21:41.601817+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "demerle",
            "first_name": "Nils",
            "last_name": "Demerle",
            "bookmarks": []
        },
        "slug": "acids-360-by-nils-demerle-and-david-genova",
        "pk": 4377,
        "published": true,
        "publish_date": "2026-02-17T15:59:44+01:00"
    },
    {
        "title": "Sound, data, and the creation of the virtual space",
        "description": "With the advancement of AI, technologies like Muse and ACE are transforming the creation of virtual worlds. This raises the question: How will AI affect the role of the sound designer?",
        "content": "<p>The year 2025 began with a shift in the tech industry: the arrival of Muse, in partnership with Microsoft, and the debut of ACE with Nvidia and Xbox, two generative AIs capable of building interactive virtual environments in video games. With this announcement, artificial intelligence moved beyond simple image and video generation&mdash;which had already shown signs of creative exhaustion&mdash;to fully dive into the creation of complete worlds, with their own rules and landscapes in constant transformation. And here comes the inevitable question:<br /><br /></p>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<h3 style=\"text-align: center;\"><em><strong>How do these new generative virtual worlds sound?<br /><br /></strong></em></h3>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae85b1b\">\r\n<p>From the tech industry, we are being trained not only to inhabit virtual environments that are increasingly similar to reality, but also to understand how they are created.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae85e1c\">\r\n<p>These new technologies build virtual landscapes by shaping our relationship with the immediate environment, and in that process, they force us to rethink the role that immersive experience designers are starting to play.<br /><br /></p>\r\n<h2>The Invisible Architecture</h2>\r\n<p>Sound has always been an invisible architecture, a language that speaks directly to the body. 
The reverberation of an underground tunnel tells us more about its depth than any 4K render.</p>\r\n<p>&nbsp;</p>\r\n<blockquote>\r\n<div style=\"text-align: center;\">&ldquo;The echo of footsteps on a paved street carries an emotional weight because the sound that reverberates from the surrounding walls places us in direct relation to the space.&rdquo;</div>\r\n<div>\r\n<div>\r\n<div>\r\n<div style=\"text-align: center;\">Juhani Pallasmaa</div>\r\n</div>\r\n</div>\r\n</div>\r\n</blockquote>\r\n<p>&nbsp;</p>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae863cc\">\r\n<p>The crunch of gravel underfoot, the density of fog on an empty street, even the emptiness of artificial silence&mdash;these are all elements traditionally designed by human hands, with human decisions. But what happens when these landscapes begin to generate themselves? When sound stops being composed and starts being predicted?</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8665b\">\r\n<p>A reverberation tells us the depth of a tunnel before we see it; the directionality of a sound makes us turn our heads in a dark room.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8695f\">\r\n<p>If sound has always connected us to space intuitively, technology now redefines that relationship.</p>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae86c51\">\r\n<p>A few years ago, I recorded a radio program inside the South Water Tower in Germany (Wasserturm S&uuml;d in Halle, Saale).</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae86f4e\">\r\n<p>The tower was an immense cylinder, empty, a shell of concrete and brick. That experience shaped my understanding of reverberations. 
In a large circular space, a voice, a bandoneon, and a clarinet transform an empty, semi-abandoned space into a place full of life.<br /><br /><a href=\"https://youtu.be/yFnnVxH1O3Y\" title=\"Wasserturm - Radio Art Residency 2018 - Radio Corax\">https://youtu.be/yFnnVxH1O3Y</a></p>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae87201\">\r\n<p>I wonder what will happen when the acoustics of a space stop being a physical consequence and become an algorithmic decision.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae874c4\">\r\n<p>When an AI determines the exact duration of a reverberation or the sonic depth of a tunnel before someone walks through it, will we still perceive the space in the same way? Or will we adapt to a sound architecture that no longer responds to our presence, but rather to a calculated logic about how a space sounds?</p>\r\n<p>&nbsp;</p>\r\n<h2>The Future Sounds Like Data</h2>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae87774\">\r\n<p>But the video game industry views this advancement not only with enthusiasm but also with concern. 
WHAM (World and Human Action Model), the technology powering Muse, is already demonstrating that prediction and real-time generation of environments are the new frontier.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae879b1\">\r\n<p>From just one second of human gameplay&mdash;equivalent to ten frames&mdash;it is capable of predicting the evolution of a video game session.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae87bb1\">\r\n<p>If Muse creates worlds and WHAM anticipates them, it&rsquo;s only a matter of time before similar systems also handle the organization of sound in space.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae87dd0\">\r\n<p>AI will not only be able to generate sounds in real time, but also decide where to place them, how they behave in relation to the environment, and how they interact with the user.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae88066\">\r\n<p>The technological evolution of audio spatialization, from stereophony to object-based audio, has transformed the way we experience sound in a space; but AI could take it one step further: a sound space that constantly adapts to the user&rsquo;s position, speed, visual environment, and even their emotional state.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae882ca\">\r\n<p>We will no longer talk about sound designers manually programming each effect, but about systems trained to generate complete sound environments.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae885c8\">\r\n<p>But it&rsquo;s not just about creating sounds; it&rsquo;s about defining the rules by which they will be distributed in the virtual space.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae888d0\">\r\n<p>If AI learns to 
generate sound environments, the question is: what data will it use to train itself? If it is only fed conventional soundscapes, it could end up replicating a static world, lacking the diversity and organic nature of real sound. AI does not improvise or dream; it only replicates what we give it.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae88bc4\">\r\n<p>If we train a system with recordings of urban spaces that reflect only hegemonic cultures, they could end up being an artificial echo of the great powers and a poor representation of more marginalized regions.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae88e1d\">\r\n<p>If the database comes exclusively from cinematic recordings or sound libraries already available on the internet, we will lose the richness of spontaneous and natural sound.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae89020\">\r\n<p>The way we curate these datasets will determine the type of landscapes that will be built in the future.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae89210\">\r\n<p>But who will be the next curators of these data banks? How do we prepare for these new models in which art is conditioned by AI? Will users truly notice the difference?</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae893fb\">\r\n<p>The video game industry has already split into two clear stances. 
On one side, there are those who see the new generative AIs as a technological tool that streamlines and perfects world creation, like Brendan Greene, the person responsible for popularizing the battle royale genre, from which games like Fortnite emerged.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae895da\">\r\n<p>On the other side, there are those who believe AI will replace workers in the industry with models lacking creativity and that players will notice this change. In this group are the creators of the No GEN AI label, a collective of independent developers who have designed a logo for studios to display on digital store pages, indicating that no generative AI was used in the creation of the video game.<br /><br /></p>\r\n<h2>The Great Challenge: Between Creation and Curation</h2>\r\n<h3 style=\"text-align: center;\"><em>Will we continue designing sounds, or will we start curating sound data for AI?</em><br /><br /></h3>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae89998\">\r\n<p>From my point of view, the sound designer of the future will no longer compose an atmosphere from scratch; instead, they will select, refine, and model datasets of millions of sounds, thus training a system that will create sounds in real time.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae89bf7\">\r\n<p>Like a digital archaeologist, they won&rsquo;t create as much as they will direct the machine&rsquo;s possibilities. 
Their task will be to choose the sounds, define their characteristics, and establish the parameters with which AI will operate within an immersive environment.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae89ebd\">\r\n<p>If the role of the sound designer increasingly leans toward data management and model training, our way of perceiving space will change completely.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8a1a5\">\r\n<p>It&rsquo;s not just about adapting but deciding whether we want the sound of the future to be a mere replica of what we already know or an opportunity to reimagine our relationship with listening, both in virtual and physical spaces.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8a4a1\">\r\n<p>I am not against AI, but I don&rsquo;t see it as an absolute solution either. I believe it is essential to understand its implications, explore its possibilities, and question its limits.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8a7a5\">\r\n<p>The answers to these questions might give us the chance to find possible gaps in a system that will determine how we conceive immediate reality.</p>\r\n<p>&nbsp;</p>\r\n<br />\r\n<div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8aa87\">\r\n<h5>References and Resources</h5>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8ad83\">\r\n<p><a href=\"https://www.youtube.com/watch?v=YBJEiWDPyGs\">- Announcing GeForce RTX 50 Series | CES 2025 Keynote from CEO Jensen Huang</a></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8b098\">\r\n<p><a href=\"https://www.rollingstone.com/culture/rs-gaming/playerunknown-artemis-prologue-impressions-1235272629/\">- The Creator of Battle Royales Wants to Reshape Gaming &mdash; 
Again</a></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8b398\">\r\n<p><a href=\"https://www.nvidia.com/en-us/geforce/news/g-assist-ai-companion-for-rtx-ai-pcs/\">- NVIDIA Redefines Game AI With ACE Autonomous Game Characters</a></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8b698\">\r\n<p><a href=\"https://www.npr.org/2024/08/02/1198912993/video-games-ai-artificial-intelligence-warner-bros-strike\">- Video game performers are on strike &mdash; and AI is the sticking point</a></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8b99b\">\r\n<p><a href=\"https://www.wired.com/story/the-prompt-ethical-generative-ai-does-not-exist/#intcid=_wired-article-bottom-recirc_099be05e-5e32-45c3-b915-b01938af775a_roberta-similarity1\">- Xbox Pushes Ahead With New Generative AI. Developers Say &lsquo;Nobody Will Want This&rsquo;</a></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8bc90\">\r\n<p><a href=\"https://www.wired.com/story/the-prompt-ethical-generative-ai-does-not-exist/#intcid=_wired-article-bottom-recirc_099be05e-5e32-45c3-b915-b01938af775a_roberta-similarity1\">- I&rsquo;m Not Convinced Ethical Generative AI Currently Exists</a></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8bf60\">\r\n<p><a href=\"https://www.polygon-treehouse.com/no-gen-ai-seal\">- The No Gen AI Seal</a></p>\r\n</div>\r\n</div>\r\n</div>\r\n<div>\r\n<div>\r\n<div id=\"ld-fancy-heading-67d987ae8c24f\">\r\n<p><a href=\"https://www.nature.com/articles/s41586-025-08600-3\">- World and Human Action Models towards gameplay ideation</a></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<p>&nbsp;</p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div id=\"gtx-trans\">\r\n<div>&nbsp;</div>\r\n</div>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 70,
                "name": "Audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 232,
                "name": "Audio 3d",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 272,
                "name": "Generative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1945,
                "name": "generative ai",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2648,
                "name": "generative audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 636,
                "name": "Generative music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2766,
                "name": "Next Audio Generation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 628,
                "name": "Video gaming ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 29278,
            "forum_user": {
                "id": 29250,
                "user": 29278,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Sol-Rezza-05-2024-214x300.jpg",
                "avatar_url": "/media/cache/b1/29/b12985af83e892ca90cecaaaf693b3b9.jpg",
                "biography": "Sol Rezza is an Argentinian composer, sound designer and audio engineer. Her practice incorporates experimental electronics with spatial audio to create immersive experiences for virtual ecosystems and live performances.\nShe combines multilingual voice samples, granular synthesis and sequencers with open-source multichannel audio technology like the SoundSquares plug-in.\nCurrently, she is developing research on how new technologies (AI, machine learning, VR, etc.) influence the creation and production of contemporary storytelling.\nRezza's work has been shown at MUTEK Montreal (CA), MUTEK (AR/ES), CTM Festival (DE), IN/OUT Festival, Tsonami Festival (CL), BRIWF festival (BR), Simultan Festival (RO), Borealis Festival (NO), HÖRLURS Festival (SE), among others. She participated in artist residencies including the Radio Art Residency at Radio Corax (DE), Somerset House Studios Residency (UK) and Binaural Nodar Residency (PT).",
                "date_modified": "2026-02-05T19:19:13.352241+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "solrezza",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 104,
                    "user": 29278,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "sound-data-and-the-creation-of-the-virtual-space",
        "pk": 3362,
        "published": true,
        "publish_date": "2025-03-18T16:24:43+01:00"
    },
    {
        "title": "Inform and evaluate a public space sound installation through perceptual evaluations, an art x science collaboration. (Niches Acoustiques II)",
        "description": "NYU Ircam Forum 2022 contribution by Valérian Fraisse, Nadine Schütz and Nicolas Misdariis.\nAssociated Article: Niches Acoustiques: urban soundscape design, or, composing (with) the sonic landscape of a public square in Paris. (Niches Acoustiques I)",
        "content": "<p><img alt=\"View from the forecourt towards the courthouse (Tribunal Judiciaire) and the Maison des Avocats.\" src=\"/media/uploads/user/b667b37bfbb42886bf9cd0a73f19c44d.jpg\">We will present the scientific and artistic collaboration currently implemented within the Perception and Sound Design team at IRCAM-STMS, in the framework of Valerian Fraisse's thesis, with the sound artist Nadine Sch&uuml;tz, composer in research at IRCAM. This collaboration aims to inform and accompany the composition of a permanent sound installation currently being created by Nadine Sch&uuml;tz. This installation, entitled \"Niches Acoustiques\", a winning project of the Participatory Budget of the City of Paris, is dedicated to an urban public space: the square of the new courthouse (Tribunal Judiciaire) of Paris. After a campaign of recordings and measurements allowing us to characterize the existing sound environment of the site, we seek, on the one hand, to inform the composition of this work based on laboratory listening tests and, on the other hand, to evaluate the impact of the installation on the urban soundscape in situ. During the presentation, we will jointly introduce the general framework of this mixed art/science research within a general sound design research approach, present the methodology and the results of the first experimental phases of the study, and discuss the implications of this collaboration from scientific and artistic points of view.</p>\n<p><img alt=\"HOA studio work session at IRCAM (Nadine Sch&uuml;tz, Val&eacute;rian Fraisse, Nicolas Misdariis)\" src=\"/media/uploads/user/8f7855382c826605d7a9d9ea81e27ffd.jpg\"></p>",
        "topics": [
            {
                "id": 919,
                "name": "art research collaboration",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 524,
                "name": "Design et traitement sonores",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 815,
                "name": "soundscape",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 918,
                "name": "urban",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17607,
            "forum_user": {
                "id": 17604,
                "user": 17607,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Sonic_Topologies_1257_b_cutsquare_smallsmall.jpg",
                "avatar_url": "/media/cache/b4/99/b499fa45336c40f5a3857c39a793e3a0.jpg",
                "biography": "Nadine Schütz is a sound artist, architect and composer from Switzerland, based in Paris. She explores the auditory landscape like an environmental interpreter and composes by developing the acoustic qualities and ambiences of a site. Space and place become thus a creative score that informs and directs its own transformation. Her compositions, performances and scenographic sound work have been presented in Zurich, Paris, London, Venice, Naples, New York, Moscow, Tokyo and Kyoto. Within urban development projects, her interventions combine the artistic reading of a site with the concern for augmenting its acoustic comfort and identity. Through an original combination of techniques derived from bio- and psychoacoustics, music, sculpture and landscape architecture, she creates sound installations and acoustic designs that participate tangibly in users' daily experiences. Nadine holds a PhD in landscape acoustics from ETH Zurich, where she installed a new studio for the spatial simulation of sonic landscapes. She teaches at ETH Zurich and Parsons Paris and is currently a guest composer in the Acoustic-and-Cognitive-Spaces and the Perception-and-Sound-Design Teams at IRCAM-STMS.",
                "date_modified": "2024-03-21T11:01:29.312466+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 766,
                        "forum_user": 17604,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "ns_echora",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "inform-and-evaluate-a-public-space-sound-installation-through-perceptual-evaluations-an-art-x-science-collaboration-niches-acoustiques-ii-1",
        "pk": 1366,
        "published": true,
        "publish_date": "2022-09-20T09:42:55.495030+02:00"
    },
    {
        "title": "Partiels - Keynote & Workshop - Exploring the content and characteristics of sounds",
        "description": "IRCAM Forum Workshops 2025 Hors-Les-Murs Rīga - Liepāja (Latvia) - 25 Sept. 2025, 14:30 - 15:15",
        "content": "<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\n<p>In this talk,&nbsp;<strong>Pierre Guillot</strong>&nbsp;will introduce&nbsp;<strong>Partiels</strong>, an open-source application developed at&nbsp;<strong>IRCAM</strong>&nbsp;for analyzing digital audio files and exploring sound characteristics. Leveraging&nbsp;<strong>Vamp plug-ins</strong>, the software extracts a wealth of audio descriptors&mdash;including spectrum, partials, pitch, tempo, text, and chords&mdash;enabling deep and multifaceted sound investigation.</p>\r\n<p>As the successor to&nbsp;<strong>AudioSculpt</strong>, Partiels offers a&nbsp;<strong>modern, flexible interface</strong>&nbsp;designed for visualizing, editing, and exporting analysis results. It bridges diverse needs, from&nbsp;<strong>musicological research</strong>&nbsp;to&nbsp;<strong>sound creation</strong>&nbsp;and&nbsp;<strong>signal processing</strong>&nbsp;applications.</p>\r\n<p>The presentation will detail Partiels'&nbsp;<strong>core functionalities</strong>, such as:</p>\r\n<ul>\r\n<li>Structured analysis workflows and audio file management,</li>\r\n<li>Interactive visualization and editing of results,</li>\r\n<li>Data export and sharing capabilities,</li>\r\n<li>Seamless interoperability with environments like&nbsp;<strong>Max</strong>&nbsp;and&nbsp;<strong>Pure Data</strong>.</li>\r\n</ul>\r\n<p>Additionally, Guillot will showcase&nbsp;<strong>IRCAM&rsquo;s specialized analysis plug-ins</strong>, many of which integrate&nbsp;<strong>machine learning models</strong>, as well as the&nbsp;<strong>IRCAM Vamp extension</strong>,&nbsp;an enhanced framework that addresses key limitations of the original Vamp format.</p>\r\n<p>This presentation will be followed by a workshop where participants will be invited to use the tools in concrete examples to discover the possibilities offered by these 
technologies.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4386ee581c71e53f779969e700880117.png\" /></p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 576,
                "name": "Partiels",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "partiels-keybote-workshop-exploring-the-content-and-characteristics-of-sounds",
        "pk": 3574,
        "published": true,
        "publish_date": "2025-07-22T12:17:14+02:00"
    },
    {
        "title": "«Petite Violence: Breath» multichannel sound installation by Gin (Taiwan)",
        "description": "«Petite Violence: Breath» is a developing series of multisensory dynamic spatial installations that explores the intersection between sound and visual perception, and the resulting connections, similarities and conflicts within individual sensory experiences. The project draws inspiration from the acoustic similarity between the sound of a manual balloon pump and human breathing, using the sonic interplay between the organic and the mechanical to evoke associations with the viewer's own somatic experience.",
        "content": "<p></p>\r\n<p><em>&laquo;Petite Violence: Breath&raquo;</em> is a developing series of&nbsp;multisensory, multichannel dynamic spatial installations that explores the intersections between sound and visual perception, and the resulting connections, similarities, and conflicts within individual&nbsp;sensory experiences.</p>\r\n<p><img alt=\"Petite Violence: Breath two versions\" src=\"https://forum.ircam.fr/media/uploads/user/fa9378af867762fb173a91a05afb09da.png\" /></p>\r\n<p>The project draws its creative inspiration from an incidental discovery &mdash; the sound of a manual balloon pump resembles the sound of rapid human breathing. Both sounds are akin to the swift passage of air through a narrow tube, creating a similar compression noise. The gradual inflation of a balloon becomes a visible metaphor for the accumulation of pressure. Its tension and the threshold of possible rupture mirror the human body&rsquo;s physiological and emotional responses under stress. The intention is not to pursue formal tension alone, but to use the layered sounds of everyday objects to evoke a bodily awareness&mdash;one that recalls sensory experiences often suppressed and overlooked through social conditioning. 
Through this process, the project seeks to make perceptible the subtle traces of perception that lie hidden within ordinary experience.&nbsp;</p>\r\n<p>Inspired by this concept, the early versions of this project employed a latex balloon inflation device and a multichannel speaker system, attempting to evoke a shared human experience through the interplay of balloon inflation sounds, human gasps, and the physical act of an inflating balloon.</p>\r\n<p>&nbsp;</p>\r\n<p><em>&laquo;Petite Violence: Breath&raquo; </em>&nbsp;evolved through two distinct stages.</p>\r\n<p><img alt=\"Petit Violence: Breath #1 documentation\" src=\"https://forum.ircam.fr/media/uploads/user/790e6c83aae851e7a559c1ce75f7ea32.png\" /></p>\r\n<p><a href=\"https://youtu.be/4426v2HHRvY\" title=\"Petit Violence: Breath #1\">The first version: Petit Violence: Breath #1</a></p>\r\n<p>In the first stage, the work was presented within a black box, where a six-channel sound system surrounded a central balloon pump, creating an enveloping field of sound. Viewers could move around the installation, immersed in layers of sound emitted from multiple directions&mdash;each channel carrying a unique fragment of the same unfolding narrative. Amid the dense sonic layers, a red latex balloon at the center gradually inflated toward its limit. Just as it seemed about to burst, a sudden explosive sound erupted &mdash; yet the balloon itself remained intact, producing a sensory contradiction. As the balloon slowly deflated, the heightened tension began to dissipate and settle. 
Once it was fully deflated, the system automatically restarted, initiating a new cycle of inflation and release.</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"Petite Violence: Breath #2 documentation\" src=\"https://forum.ircam.fr/media/uploads/user/b187d49c57b5424367bc5b8462b1a74a.png\" /></p>\r\n<p><a href=\"https://youtu.be/UoaYw0cDzps\" title=\"Petite Violence : Breath #2\">The second version</a><a href=\"https://youtu.be/4426v2HHRvY\" title=\"Petit Violence: Breath #1\">: Petite Violence: Breath #2</a></p>\r\n<p>The second stage reduced the installation to a two-channel sound system within a semi-enclosed space, limiting the experience to one participant at a time and transforming it into an intimate bodily encounter. The core element was replaced by a single white latex balloon, installed at the center of minimalist white walls. Its inflation generated not only visual tension but also physical contact, directly challenging the viewer&rsquo;s sense of bodily boundary and psychological tolerance. Two speakers, placed at the far left and right, projected sound directly toward the listener&rsquo;s ears, compressing the sonic field into an unnatural proximity that deprived it of spatial sense. The sound seemed to embed itself within the listener&rsquo;s head, blurring the boundaries between exterior sound, spatial perception, and the body. 
When the balloon reached its designed limit, a sharp, final intake of breath coincided with the extinguishing of the overhead light&mdash;leaving only the slow deflation of the balloon as the lingering resonance of the space.</p>\r\n<p>&nbsp;</p>\r\n<p>The third version of <em>&laquo;Petite Violence: Breath&raquo;</em>, set for IRCAM Forum Workshops Taipei 2025, will be a 4&ndash;6 channel sound installation, replacing the balloon pump with the proximity of sonic experience, offering an intimate, solo experience focused on how sound enhances bodily awareness and sensitivity to internal states.</p>\r\n<p><img alt=\"Layout Plan for Petite Violence: Breath #3\" src=\"https://forum.ircam.fr/media/uploads/user/303590537fb97364a2fc00143ec14426.png\" /></p>\r\n<p>&nbsp;</p>\r\n<p><em>&laquo;Petite Violence: Breath&raquo;</em>&nbsp; seeks to expose the latent structures of perception embedded in everyday life, rendering them into a tangible spatial experience. The work is not merely a representation of &ldquo;pressure,&rdquo; but a critical reflection on how emotional tension is endured, internalized, and ultimately normalized under the disciplinary forces of socialization. Through the interplay of sound and mechanical activation, the installation constructs an &ldquo;other&rdquo; space &ndash; one that invites the viewer to re-examine the relationship between bodily sensation, emotional awareness, and the structures of social alienation.</p>\r\n<p>In this process, <em>&laquo;Petite Violence: Breath&raquo;</em> confronts the boundaries of the body, evoking sensations that lie beneath conscious awareness &mdash; those suppressed, unspoken, yet undeniably present forms of feeling. The cyclic rhythm of inflation, strain, and release mirrors the oscillation between control and collapse, tension and relief. Like a recurring dream, it returns endlessly to the subconscious, where the invisible perceptions of everyday life quietly reveal themselves.</p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 849,
                "name": "interactive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3537,
                "name": "multisensory",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3538,
                "name": "sound image",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 123014,
            "forum_user": {
                "id": 122850,
                "user": 123014,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/_.png",
                "avatar_url": "/media/cache/50/8f/508f0ae6cda63cae5d037132eca7b4cc.jpg",
                "biography": "Gin is a 25-year-old emerging multidisciplinary artist currently pursuing an MFA in New Media Art at Taipei National University of the Arts. Her work explores the dynamic interplay between inner perception and the external environment, inviting reflection on the fluid boundaries between awareness and reality. \nHer practice spans visual imagery, interactive installations, body art, and experimental sound, with works showcased in international exhibitions such as the SOUND/IMAGE Festival 2024 in Greenwich, London.\n\nBefore entering the art field, Gin earned a Bachelor's degree in Design and gained over two years of experience in visual design and design research. Her expertise includes graphic design, project planning, interactive design, creative coding, music post-production, and more.",
                "date_modified": "2025-11-04T20:44:48.085243+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "gin2811",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3849,
                    "user": 123014,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "petite-violence-breath-by-gin-taiwan",
        "pk": 3849,
        "published": true,
        "publish_date": "2025-10-13T11:33:06+02:00"
    },
    {
        "title": "\"The Demonstration of BCI Interactive Soundscape Interface for Soundscape Composition\" by Yi-hsien Chen (Taiwan)",
        "description": "The demo presents a “BCI Interactive Soundscape Interface” using the Ultracortex, a wearable EEG detector developed by Open BCI team. In this demo, I explore the potentials of bio-feedback systems in soundscape composition by employing EEG signals to manipulate the electronic sound processing modules and spatial parameters based on SPAT to transform the environmental sounds using user's mental states.",
        "content": "<p><span>This demo showcases a &ldquo;BCI Interactive Soundscape Interface&rdquo; (BCI stands for brain computer interface) using the Ultracortex, a wearable EEG headset developed by Open BCI team, integrated with Max/MSP. In this project, the Ultracortex records user&rsquo;s brainwave activity while he or she listens to various soundscape materials. These brainwave signals are then sent to Max/MSP to control electronic sound processing modules and spatialization in real-time within multi-channel system. The interface is designed to render sounds through 16-channel system, transforming the sounds into abstract sonic textures. The transformed sounds, in turn, influence the user&rsquo;s brain activity, establishing a continuous, bidirectional interaction between the mind and the surrounding evolving sounds. This project aims to provide an interface through which participants - including non-musician - can play soundscape materials using their own mental states.</span></p>",
        "topics": [],
        "user": {
            "pk": 24222,
            "forum_user": {
                "id": 24195,
                "user": 24222,
                "first_name": "Yi-hsien",
                "last_name": "Chen",
                "avatar": "https://forum.ircam.fr/media/avatars/Screenshot_2023-09-29_at_8.52.47_AM.png",
                "avatar_url": "/media/cache/b0/48/b0486d299f64cf90402c153961ce1028.jpg",
                "biography": "Yi-Hsien Chen is a Taiwanese composer. He has received degrees from Taipei National University of the Arts and National Taiwan Normal University. In 2016, he began to pursue Ph.D. with major in music theory and composition at UC San Diego where he studied with Katharina Rosenberger, Tom Erbe, and Lei Liang who is his advisor and committee. He was awarded with full scholarship from UC San Diego for five years. He is currently teaching at the Department of Music in National Sun-Yat Sen University.  \n\nChen composes in a wide range of musical styles and actively engages in interdisciplinary collaboration. His works encompass diverse instrumentations, including orchestra, chamber ensemble, electroacoustic music, theater, and film soundtrack. They have been selected and performed by renowned ensembles, institutions, and festivals, such as the Mivos Quartet at June in Buffalo, the National Taiwan Symphony Orchestra in the competition Voice of the New and Brilliant – The Sound of Formosa, the Weiwuying International Music Festival, and Taiwan C-Lab.",
                "date_modified": "2025-11-20T12:11:00.585570+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 859,
                        "forum_user": 24195,
                        "date_start": "2022-03-15",
                        "date_end": "2025-06-16",
                        "type": 0,
                        "keys": [
                            {
                                "id": 483,
                                "membership": 859
                            },
                            {
                                "id": 835,
                                "membership": 859
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "yihsien",
            "first_name": "Yi-hsien",
            "last_name": "Chen",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 394,
                    "user": 24222,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 24222,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "the-demonstration-of-bci-interactive-soundscape-interface-for-soundscape-composition-by-yi-hsien-chen-taiwan",
        "pk": 3772,
        "published": true,
        "publish_date": "2025-10-06T12:04:03+02:00"
    },
    {
        "title": "Métaphonies : musiques interactives et prosodies neurologiques",
        "description": "Résidence en recherche artistique 2017.18. \r\nMichelle Agnès Magalhaes.\r\nEn collaboration avec Béatrice Sauvageot, ainsi que les équipes Représentations musicales, Interaction Son Musique Mouvement de l’Ircam-STMS et le département de la Pédagogie et Actions culturelles.",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">R&eacute;sidence en recherche artistique 2017.18</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<p><strong>M&eacute;taphonies : musiques interactives et prosodies neurologiques</strong><br />En collaboration avec B&eacute;atrice Sauvageot, ainsi que les &eacute;quipes<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/repmus/\">Repr&eacute;sentations musicales</a>,<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/issm/\">Interaction Son Musique Mouvement</a><span>&nbsp;</span>de l&rsquo;Ircam-STMS et le d&eacute;partement de la P&eacute;dagogie et Actions culturelles.</p>\r\n<p>Fruit d&rsquo;une collaboration avec B&eacute;atrice Sauvageot (orthophoniste, chercheuse en neurosciences, fondatrice de l&rsquo;association Puissance Dys), cette recherche porte sur l&rsquo;&eacute;laboration d&rsquo;une proposition artistique &agrave; partir d&rsquo;une r&eacute;flexion sur le cerveau musicien. Par cela, nous comprenons les aspects neurologiques de l&rsquo;&eacute;coute et de la sensibilit&eacute; musicale, ind&eacute;pendamment de la pratique de l&rsquo;individu en tant que musicien. Bien plus que producteur de la jouissance esth&eacute;tique, le cerveau musicien est aussi le grand responsable en ce qui concerne les facult&eacute;s cognitives, &eacute;tant donn&eacute; sa capacit&eacute; &agrave; associer les exp&eacute;riences &eacute;motionnelles, motrices et sensorielles. 
Projet hybride de pi&egrave;ce musicale interactive et jeu musical pens&eacute; sous l&rsquo;optique th&eacute;rapeutique, M&eacute;taphonies est un projet d&rsquo;application pour smartphone et tablette destin&eacute; au public en g&eacute;n&eacute;ral mais aussi aux dyslexiques, aux personnes qui souffrent de troubles d&rsquo;apprentissage ou tout simplement &agrave; des personnes soucieuses d&rsquo;augmenter leurs capacit&eacute;s de cognition, de m&eacute;morisation et d&rsquo;imagination.</p>\r\n<h6></h6>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Michelle Agn&egrave;s Magalhaes</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\" style=\"text-align: center;\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202018/.thumbnails/michelle_agnes.jpg/michelle_agnes-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographie</h3>\r\n<p>N&eacute;e au Br&eacute;sil et laur&eacute;ate de la bourse Unesco-Aschberg en 2003, Michelle Agnes Magalhaes explore dans sa musique les limites entre geste et &eacute;criture, improvisation et composition. <span>Entre 2009 et 2011, elle joue en tant que pianiste dans l&rsquo;ensemble d&rsquo;improvisation Abaetetuba et en duo avec le contrebassiste Celio Barros.&nbsp;</span>Apr&egrave;s des &eacute;tudes &agrave; l&rsquo;universit&eacute; de Sao Paulo, elle se perfectionne en composition musicale avec Salvatore Sciarrino (Acad&eacute;mie Chigiana de Sienne et conservatoire de Latina). En 2014-2015, elle int&egrave;gre l&rsquo;&eacute;quipe Analyse des pratiques musicales de l&rsquo;Ircam-STMS dans le cadre du projet GEMME (Geste musical : modèles et exp&eacute;riences). 
L&rsquo;ann&eacute;e suivante, elle poursuit une recherche postdoctorale en composition intitul&eacute;e &laquo; &Agrave; double entente : l&rsquo;invention du dialogue &eacute;criture &ndash; improvisation &raquo; au sein l'&eacute;quipe Repr&eacute;sentations musicales de l&rsquo;Ircam-STMS et de l'universit&eacute; Pierre et Marie Curie (UPMC-Sorbonne Universit&eacute;s). Elle collabore comme compositrice avec de nombreux ensembles (Abstra&iuml;, Percorso Ensemble, Arsenale, Accroche Note, Promenade Sauvage, ECCE, Bahia Blanca Soloists, Quarteto Prometeo, Flame Ensemble, Ensemble TaG Neue Musik, 20&deg; dans le noir, Talea Ensemble, Ensemble L'Itin&eacute;raire et Ensemble Multilat&eacute;rale). Depuis 2016, elle collabore avec B&eacute;atrice Sauvageot dans des projets alliant la musique et la neurologie, et dans DysOrchestre, ensemble de musiques improvis&eacute;es.</p>\r\n</div>\r\n</div>\r\n<p><strong>Courriel :</strong><span>&nbsp;</span>Michelle.Magalhaes (at) ircam.fr</p>\r\n<ul class=\"unstyled-list\">\r\n<li class=\"mb1\"><strong>&Eacute;quipe :<span>&nbsp;</span></strong><a href=\"https://www.ircam.fr/recherche/equipes-recherche/repmus/\">Repr&eacute;sentations musicales</a><span>&nbsp;</span>(Universit&eacute; Pierre-et-Marie-Curie (UPMC))<span>&nbsp;</span></li>\r\n</ul>\r\n<h2 class=\"dotted\">Liens</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"https://www.michelleagnes.net/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>https://www.michelleagnes.net/</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "metaphonies-musiques-interactives-et-prosodies-neurologiques",
        "pk": 21,
        "published": true,
        "publish_date": "2019-03-21T15:00:06+01:00"
    },
    {
        "title": "DAFNE+ : Lancement de la plateforme pour la préservation et la promotion de la musique expérimentale et de la production sonore",
        "description": "\"DAFNE+ offre aux créateurs de contenus numériques de nouvelles formes de création, de distribution et de monétisation de leurs œuvres d'art grâce à la technologie blockchain.\" Cette présentation, faite lors des Ateliers du Forum IRCAM @Paris 2024, fait le point sur le projet européen au moment du lancement de la plateforme.",
        "content": "<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sentateurs : Hugues Vinet et Greg Beller</p>\r\n<h1>DAFNE+ Platform is live!</h1>\r\n<p><a href=\"https://dafneplus.eng.it/\">https://dafneplus.eng.it</a></p>\r\n<p>Un moment historique dans le paysage de l'art et du design num&eacute;riques se produit avec le lancement officiel de la plateforme DAFNE+, une plateforme NFT de pointe con&ccedil;ue pour faire avancer la communaut&eacute; des artistes, des designers et des musiciens.</p>\r\n<p>La plateforme DAFNE+ est con&ccedil;ue pour r&eacute;pondre aux besoins &eacute;volutifs des cr&eacute;ateurs de contenu num&eacute;rique, en leur fournissant des outils innovants pour la cr&eacute;ation, la distribution et la mon&eacute;tisation de leurs &oelig;uvres artistiques par le biais de la technologie blockchain. \"L'un des principaux objectifs du projet est de rendre la distribution de contenu &eacute;quitable\".</p>\r\n<p>De mani&egrave;re intuitive et simple, sans avoir besoin de connaissances techniques en mati&egrave;re de blockchains/NFT, les communaut&eacute;s cr&eacute;atives sont invit&eacute;es &agrave; rejoindre l'organisation autonome d&eacute;centralis&eacute;e (DAO) offrant de nouveaux services et outils qui permettent la cr&eacute;ation et la cocr&eacute;ation de contenu dans une blockchain. La recherche de DAFNE+ se concentre &eacute;galement sur la d&eacute;finition de nouveaux \"business-models\" d'affaires &agrave; travers la distribution de contenu, permettant aux cr&eacute;ateurs et aux utilisateurs de mon&eacute;tiser les cr&eacute;ations multim&eacute;dias.</p>\r\n<p>Le r&ocirc;le de l'IRCAM dans DAFNE+ est notamment d'organiser&nbsp;la communaut&eacute; d'artistes&nbsp;de la musique exp&eacute;rimentale et production sonore. 
A mi-chemin entre le Forum de l'IRCAM et Sidney, l'archive du r&eacute;pertoire musical IRCAM, et bas&eacute;e sur une organisation autonome et une infrastructure distribu&eacute;e, la plateforme&nbsp;permet aux artistes, chercheurs et ing&eacute;nieurs de partager et de mon&eacute;tiser des &eacute;l&eacute;ments de technologie pour la production d'&oelig;uvres&nbsp;&eacute;lectroniques&nbsp;- biblioth&egrave;ques, patchs, documentations...</p>\r\n<ul>\r\n<li><span>Website:<span>&nbsp;</span></span><a href=\"https://dafneplus.eu\"><span>https://dafneplus.eu</span></a></li>\r\n<li><span>Platform:<span>&nbsp;</span></span><a href=\"https://dafneplus.eng.it/\"><span>https://dafneplus.eng.it</span></a></li>\r\n<li><span>Discord:<span>&nbsp;</span></span><a href=\"https://discord.gg/aR6VvV9Ttw\"><span>https://discord.gg/aR6VvV9Ttw</span></a></li>\r\n<li><span>Survey:<span><span>&nbsp;</span></span><a href=\"https://forms.gle/czcJyXhmthFkN5V48\">https://forms.gle/czcJyXhmthFkN5V48</a><a href=\"https://forms.gle/czcJyXhmthFkN5V48\"></a></span></li>\r\n<li><span>YT tutorials playlist:&nbsp;<a href=\"https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ\">https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ</a></span></li>\r\n<li><span>Newsletter:<span>&nbsp;</span></span><a href=\"https://dafneplus.eu/contact\"><span>https://dafneplus.eu/contact</span></a></li>\r\n<li>Contact:<span>&nbsp;</span><a href=\"mailto:info@dafneplus.eu\">info@dafneplus.eu</a></li>\r\n<li>Workshop:<span> <a href=\"https://forum.ircam.fr/article/detail/dafne-workshop-minting-content-on-the-platform/\">https://forum.ircam.fr/article/detail/dafne-workshop-minting-content-on-the-platform/</a></span></li>\r\n</ul>\r\n<p><strong>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1255,
                "name": "EU project",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1856,
                "name": "platform",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "dafne-launch-of-the-platform-for-the-preservation-and-promotion-of-experimental-music-and-sound-production",
        "pk": 2778,
        "published": true,
        "publish_date": "2024-03-01T11:17:06+01:00"
    },
    {
        "title": "\"The Power of Sound\" by Felipe Sanchez Luna",
        "description": "KLING KLANG KLONG is a Berlin-based studio redefining sound scenography at the intersection of music, art, science, and technology. Through projects worldwide, they transform sound into a storytelling force—creating immersive experiences where audio takes center stage. In this talk, founder Felipe Sánchez Luna shares the studio’s philosophy and reveals how sound can reshape the way we perceive and connect with the world.",
        "content": "<h5><strong>➡️ This presentation is part of <a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></h5>\r\n<p><span>For over a decade, KLING KLANG KLONG has been at the forefront of sound scenography, pushing the boundaries between music, art, science, and technology. Through groundbreaking projects like For Seasons, Fjord &amp; Bertolt, Chasing Waterfall or Light Cloud , the studio has demonstrated how sound can transform spaces and evoke deep emotional connections. Their work spans museums, international fairs, interactive brand experiences, and immersive sound installations&mdash;each designed to make sound more than just a backdrop but a true narrative force.</span><br /><span>&nbsp;</span><br /><span>KLING KLANG KLONG approaches sound as a storyteller, shaping emotional experiences where music and sound design CAN take center stage. Rather than merely supporting visuals, sound becomes the main vehicle for storytelling, creating immersive, unforgettable environments.</span><br /><span>&nbsp;</span><br /><span>In this talk, founder, creative lead, and managing director Felipe S&aacute;nchez Luna will take you behind the scenes of some of KLING KLANG KLONG&rsquo;s most remarkable projects. He will explore the studio&rsquo;s core philosophy: the power of sound as a primary tool for storytelling. Building on insights from his TED Talk, Felipe will reveal how sound can not only enhance but completely redefine the way we experience the world around us.</span></p>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/03f31500146a4956431af7bb3886a2b0.jpg\" width=\"973\" height=\"711\" /></p>\r\n<p></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2b5ee2ead7ec947ba59143be8ad1d0dc.png\" width=\"983\" height=\"552\" /></p>",
        "topics": [
            {
                "id": 3436,
                "name": "sound experiences",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3435,
                "name": "sound scenography",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1867,
                "name": "storytelling",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 149,
                "name": "Technology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4324,
            "forum_user": {
                "id": 4322,
                "user": 4324,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_3160_2_2.jpg",
                "avatar_url": "/media/cache/8a/00/8a000413e1bfe09c4fc4a51243d7af4c.jpg",
                "biography": "Felipe Sánchez Luna, from Bogotá and now based in Berlin, is a pioneer in sound design and interactive experiences. He co-founded kling klang klong, a studio known for its innovative sonic work blending film, music, dance, and technology into immersive soundscapes. With a background in creative coding, Felipe explores the intersection of art and technology, using generative music and intelligent audio engines to turn data into poetic auditory experiences. While highly skilled technically, he remains attuned to the socio-political context of his work, aiming to deepen understanding through sound.\n\nAt kling klang klong, Felipe is both creative and managing director, leading a multidisciplinary team of composers, designers, scientists, and technologists. Their projects span museums, art spaces, virtual worlds, and public events worldwide. Beyond studio work, Felipe shares his insights at major conferences and festivals, including TED Vancouver 2024, TEDx Berlin 2024, KIKK Festival, Hxouse Toronto, Music Tech Germany and AI conference. Despite his achievements, he continues to push boundaries, inspiring audiences to reflect on the role of sound in our lives.",
                "date_modified": "2025-12-18T12:31:42.594348+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "felipesanlu",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-power-of-sound-by-felipe-sanchez-luna",
        "pk": 4091,
        "published": true,
        "publish_date": "2025-12-18T12:30:54+01:00"
    },
    {
        "title": "Music from the Metaverse: 3D Illusory Immersive Soundscape “The Spirit of the Giant Tree”",
        "description": "Presented during the IRCAM Forum @NYU 2022.\r\n\r\nThis is the program note of my work: \r\nMusic from the Metaverse: 3D Illusory Immersive Soundscape “The Spirit of the Giant Tree”\r\n\r\nThe work is composed via ambisonic technique. There are links for the sound files in the article.",
        "content": "<p><strong>Program Note</strong></p>\r\n<p>Can a tree feel? What kind of consciousness does a tree have? &nbsp;<br />One day I saw a giant tree, with its rough bark and trunk and I wondered about its existence. I &nbsp;decided to explore possible answers by composing a piece of music about such a tree. &nbsp;<br />\u2028<br />This musical work depicts the growth of a giant tree, starting from a seedling, growing a rough, &nbsp;barky trunk, and all its life experiences of blooming, decay, and finally rebirth. No matter the &nbsp;environment, this tree puts all its energy into growing and stretching. The music is composed in an ambisonic system, surrounding the audience with the notion of a tree and its journey. It then plays &nbsp;in the first person conveying to the listener what the giant tree feels.&nbsp;<br />\u2028<br />In the middle of composing this work, I suddenly realized that the giant tree was more of an &nbsp;extension of Earth&rsquo;s consciousness, with an even longer life. When I closed my eyes and felt the &nbsp;music, I felt a primitive throbbing of the forests and could sense the ancient spirit of Earth and its &nbsp;power to recover and persevere, even though humans have brought her such harm. &nbsp;<br />\u2028<br />In this drastically changed 21st century, with new technologies and COVID-19, people are excited &nbsp;about the future, but cannot eclipse the feeling of eschatology. Meditating on this, I realized that the &nbsp;vitality of the earth will always be incredibly powerful and outlast humanity and other species. She &nbsp;gives life, which will always come and go, with eternal love.<br />\u2028<br />This work has been selected by international conferences including ICMC 2022 (Ireland), IRCAM Forum (New York, USA) 2022, SICMF2022 (Korea), and Atempor&aacute;nea 2022 (Argentina).&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>There are two listening format of this work:</p>\r\n<p>1. 
binaural, (Please use headphone to enjoy)<br /><a href=\"https://soundcloud.com/yi-chenglin/the-spirit-of-the-giant-tree\">https://soundcloud.com/yi-chenglin/the-spirit-of-the-giant-tree&nbsp;</a><br />2. immersive stereo (Please use stereo speakers to enjoy)<br /><a href=\"https://soundcloud.com/yi-chenglin/the-spirit-of-the-giant-tree-for-speakers-immersive-stereo\">https://soundcloud.com/yi-chenglin/the-spirit-of-the-giant-tree-for-speakers-immersive-stereo</a></p>",
        "topics": [
            {
                "id": 622,
                "name": "Immersiveaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 8487,
            "forum_user": {
                "id": 8484,
                "user": 8487,
                "first_name": "Zoe (Yi-Cheng)",
                "last_name": "Lin",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/3f7ac14247839fc28146480862faf659?s=120&d=retro",
                "biography": "Zoe Lin is a composer and software engineer, specializing in digital music. Her electronic compositions have achieved international acclaim, featured in 22 prestigious music festivals across 18 countries in Europe, Asia, North, and South America. Zoe holds a doctoral degree in composition from the University of Wisconsin-Madison. Previously, she worked as the Chief Music Officer at an AI music company, leading AI music generation research and development. Currently, Zoe is a full-time composer and part-time instructor at National Taiwan Normal University and Fu Jen Catholic University, teaching interdisciplinary courses that merge music and programming. She specializes in visual-auditory synesthetic electronic music, 3D immersive electronic music composition and mixing, and practical ambisonic system sound projection. Her work has been showcased globally, including events SiMN 2023 (Brazil), MUSLAB 2023 (Ecuador), MiRNArte 2023 (Venice, Italy), SICMF2023 (Seoul, South Korea), NIME 2023 (Mexico), NYCEMF 2023 (New York), Spatial Audio Conference 2023 (UK), NoiseFloor 2023 (UK), with upcoming features in SiMN 2023 and MUSLAB 2023's Phonographic Production - PLANETA COMPLEJO project.",
                "date_modified": "2026-02-23T08:10:15.722734+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ZoeLin",
            "first_name": "Zoe (Yi-Cheng)",
            "last_name": "Lin",
            "bookmarks": []
        },
        "slug": "music-from-the-metaverse-3d-illusory-immersive-soundscape-the-spirit-of-the-giant-tree",
        "pk": 1353,
        "published": true,
        "publish_date": "2022-09-16T10:31:22+02:00"
    },
    {
        "title": "Installation audio multicanal à Bahia - Neil Leonard",
        "description": "Cet article traite des installations sonores multicanaux créées pour une exposition à la Galleria Solar Ferrão lors d'une résidence à la Fondation Sacatar à Bahia, au Brésil. Le projet commémorait le 110e anniversaire de la naissance de l'artiste/mystique Walter Smetak (1913, Suisse - 1984, Brésil). Les pièces audio multicanal de Leonard ont été composées à partir d'enregistrements de saxophone alto traités en temps réel à l'aide de Max, d'enregistrements de terrain et d'enregistrements d'archives inédits de Walter Smetak jouant des instruments de sa conception, utilisés avec l'autorisation de sa famille.",
        "content": "<p></p>\r\n<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par : Neil Leonard<br /><a href=\"https://forum.ircam.fr/profile/nleonard/\">Biographie</a></p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/78f8f6e082cc04c31224200b3c676474.jpg\" width=\"305\" height=\"400\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: justify;\">Cet article traite des installations sonores multicanaux que j'ai cr&eacute;&eacute;es et de l'exposition que j'ai organis&eacute;e pendant la r&eacute;sidence 2023 &agrave; la Fondation Sacatar &agrave; Bahia, au Br&eacute;sil. Mon projet a comm&eacute;mor&eacute; le 110e anniversaire de la naissance de l'artiste/mystique Walter Smetak (1913, Suisse - 1984, Br&eacute;sil). Mes installations, ainsi que dix sculptures sonores de Smetak, ont &eacute;t&eacute; expos&eacute;es &agrave; la Galeria Solar Ferr&atilde;o &agrave; Salvador, Bahia, d'avril &agrave; juin 2023. Dans le cadre de mon projet, j'ai explor&eacute; une synth&egrave;se personnelle d'id&eacute;es, de sons et d'images, en &eacute;tudiant l'interconnexion des cultures du Nord et du Sud.</p>\r\n<p style=\"text-align: justify;\">Je me suis inspir&eacute;e de la pratique transdisciplinaire de Smetak. En Suisse, Smetak a commenc&eacute; sa carri&egrave;re comme violoncelliste et luthier, puis, vivant &agrave; Bahia, il a &eacute;tendu cette base pour cr&eacute;er une pratique transdisciplinaire qui incorpore la composition, l'improvisation, la conception d'instruments de musique, la peinture, la sculpture et la po&eacute;sie, anticipant ainsi les pratiques culturelles et esth&eacute;tiques contemporaines. 
Il a &eacute;t&eacute; influenc&eacute; par le milieu artistique local et a &eacute;galement &eacute;t&eacute; le mentor de figures cl&eacute;s du mouvement Tropicalia de Bahia, notamment Caetano Veloso, Gilberto Gil et Tom Z&eacute;.</p>\r\n<p style=\"text-align: justify;\">Au cours de ma r&eacute;sidence de sept semaines &agrave; Sacatar, sur l'&icirc;le d'Itaparica, j'ai explor&eacute; les liens entre Smetak et les communaut&eacute;s locales, ainsi que la splendeur naturelle de l'&icirc;le. J'ai &eacute;t&eacute; attir&eacute;e par l'une des cr&eacute;ations les plus ambitieuses de Smetak : un temple pyramidal de 22 m&egrave;tres de haut de la Soci&eacute;t&eacute; br&eacute;silienne des Eubiose, inaugur&eacute; en 1983.</p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\">Mon installation, \"Frequ&ecirc;ncias Futuras\", comprend des r&eacute;pliques en bambou du temple-pyramide et de l'ob&eacute;lisque qui l'accompagne &agrave; Itaparica, une projection vid&eacute;o avec des images du temple et du son. La composition audio multicanal est compos&eacute;e d'enregistrements de mon saxophone alto, de sons &eacute;lectroniques, d'enregistrements de terrain et d'entretiens men&eacute;s au temple avec des membres de la soci&eacute;t&eacute;. Le son est diffus&eacute; par des haut-parleurs fabriqu&eacute;s sur mesure et log&eacute;s dans des calebasses situ&eacute;es &agrave; la base de la r&eacute;plique du temple.</p>\r\n<p style=\"text-align: justify;\">Mon installation, \"O Esp&iacute;rito Sopra\", comprend une vid&eacute;o monocanal avec des images de l'album de famille de Smetak, des photographies d'Itaparica, mes vid&eacute;os sur place et du son. Le son multicanal int&egrave;gre mon saxophone alto, l'&eacute;lectronique et des enregistrements de terrain de l'&icirc;le. 
Dans cette installation, j'explore le tissu conjonctif de la vie de Smetak, y compris le temps qu'il a pass&eacute; avec sa famille sur l'&icirc;le.</p>\r\n<p style=\"text-align: justify;\">L'exposition a suscit&eacute; des conversations parmi des milliers de visiteurs. Pour beaucoup d'entre eux, ces rencontres en personne pour cr&eacute;er et r&eacute;fl&eacute;chir sur l'art &eacute;taient les premiers liens qu'ils avaient nou&eacute;s depuis le d&eacute;but de la fermeture du COVID-19 au Br&eacute;sil, trois ans auparavant.</p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>\r\n<p></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 1747,
                "name": "Brazil",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1749,
                "name": "Instruo",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 36,
                "name": "Max ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2,
                "name": "MaxMSP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1125,
                "name": "multimedia",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 328,
                "name": "Pd",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1746,
                "name": "sound installation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1748,
                "name": "Walter Smetak",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4295,
            "forum_user": {
                "id": 4293,
                "user": 4295,
                "first_name": "Neil",
                "last_name": "Leonard",
                "avatar": "https://forum.ircam.fr/media/avatars/Leonard_BW_Fixed_SML.jpg",
                "avatar_url": "/media/cache/55/50/55500bcde7da810466654840da25ba67.jpg",
                "biography": "Neil Leonard is a composer, saxophonist and transdisciplinary artist. Leonard’s work includes concerts for ensembles with live electronics, audio/visual installation and multimedia performance. He maintains active collaborations in Canada, Cuba, China, Brazil, Burundi, Italy, Israel, Japan, Tawain and across the US. Leonard works with artist from film, video, installation, dance, and theater to create and perform music, often using immersive multichannel audio configurations. \n\nLeonard’s sound installations have been featured by Mass MoCA, Williams College Museum of Art, Peabody Essex Museum, Media Lab at MIT, Havana Bienal (Cuba), Bienal de Bahia (Brazil). Large scale installations and performances with Magdalena Campos, Fujiko Nakaya, Phill Niblock and Tony Oursler were featured by the Tate Modern, documenta, Venice Biennale, Whitney Biennale. He is a Professor at the Berklee College of Music and the Artistic Director of the Berklee Interdisciplinary Arts Institute.\n\nLeonard was a Sacatar Institute Fellow, Robert Rauschenberg Foundation, Artist-in-Residence; Fulbright Specialist Award recipient; M.I.T. Art, Culture and Technology Research Affiliate.",
                "date_modified": "2025-05-12T18:35:26.627151+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 601,
                        "forum_user": 4293,
                        "date_start": "2015-11-29",
                        "date_end": "2024-10-31",
                        "type": 0,
                        "keys": [
                            {
                                "id": 108,
                                "membership": 601
                            },
                            {
                                "id": 469,
                                "membership": 601
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "nleonard",
            "first_name": "Neil",
            "last_name": "Leonard",
            "bookmarks": []
        },
        "slug": "multichannel-audio-installation-in-bahia",
        "pk": 2721,
        "published": true,
        "publish_date": "2024-02-13T10:21:53+01:00"
    },
    {
        "title": "Noise from order  - Marion Cros",
        "description": "intervention pendant les Ircam Forum Workshop 2024. Le 22 mars à 11h.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par: Marion Cros<br /><a href=\"https://forum.ircam.fr/profile/corinamors/\">Biographie</a></p>\r\n<p></p>\r\n<p><b>Noise from order<br /></b><b><br /></b>De la n&eacute;cessit&eacute; de bruire&nbsp;</p>\r\n<p>Charivari, carnaval, casserolades&hellip; Perform&eacute; et ritualis&eacute; dans un cadre donn&eacute;, le bruit donne corps &agrave; une capacit&eacute; d&rsquo;agir, rendant possible ce que j&rsquo;appelle une &laquo; super-expression de soi &raquo; : une expression libre, et, souvent, une expression de puissance collective.&nbsp;</p>\r\n<p>Je fais l&rsquo;hypoth&egrave;se que l&rsquo;action bruyante trouve son origine dans un &eacute;tat de bruit int&eacute;rieur provoqu&eacute; par une dissonance cognitive : le crissement entre la n&eacute;cessit&eacute; visc&eacute;rale de pouvoir s&rsquo;exprimer librement, et les contraintes sociales, culturelles ou politiques qui l&rsquo;en emp&ecirc;chent. 
L&rsquo;&eacute;cart&egrave;lement entre le besoin de dire et l&rsquo;injonction &agrave; se taire.<br />Ce qui est silencieux n&rsquo;est-il alors que du bruit tu ?<br />Sans silenciation, pas de bruit ?</p>\r\n<p>L&rsquo;ext&eacute;riorisation de ce bruit mental en un bruit acoustique permettrait alors de retrouver de la consonance, d&rsquo;accorder son mode d&rsquo;action (ou de r&eacute;action) &agrave; l&rsquo;&eacute;tat travers&eacute; en dedans.&nbsp;<br />Par la ritualisation de cette expression acoustique, ses op&eacute;rateurices acc&egrave;dent &agrave; un &eacute;tat aussi f&eacute;roce qu&rsquo;exalt&eacute;, et adoptent une attitude &agrave; la fois fascin&eacute;e et prudente &agrave; l&rsquo;&eacute;gard d&rsquo;un d&eacute;sordre n&eacute;cessaire.</p>\r\n<p>&Agrave; la mani&egrave;re d&rsquo;une onde porteuse, la vibration bruyante se ferait messag&egrave;re, et (&eacute;)conduirait la violence afin de l&rsquo;&eacute;loigner et de la renvoyer &agrave; ceux qui en sont responsables, ouvrant ainsi la possibilit&eacute; d&rsquo;initier un cycle nouveau.&nbsp;</p>\r\n<p>Je souhaite explorer comment, au sein de diff&eacute;rents contextes sociaux, rituels ou l&eacute;gendaires, le recours au bruit s&rsquo;impose comme un mode op&eacute;ratoire particuli&egrave;rement pertinent.&nbsp;<br />Je situe cet outil - le bruit - dans un spectre allant du pr&eacute;verbal &agrave; la saturation langagi&egrave;re.</p>\r\n<p>Je souhaite questionner sa dimension opaque et trouble, son &eacute;paisseur substantielle, et ouvrir une enqu&ecirc;te po&eacute;tique m&ecirc;lant linguistique, acoustique, sociologie, fiction et ethnologie, afin d&rsquo;imaginer et d&rsquo;envisager un d&eacute;veloppement technologique de ce proc&eacute;d&eacute;. 
&nbsp; &nbsp;</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 28752,
            "forum_user": {
                "id": 28724,
                "user": 28752,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1002dd8ef24c5b10c23ef223a0f6825f?s=120&d=retro",
                "biography": "Trained at the Haute Ecole des Arts du Rhin, the Beaux-arts de Bourges and the Angoulême\nmusic school, Marion Cros is a musician and sound technician.\nWorking in the fields of noise, electroacoustics and live performance she interweaves the poetics of listening, audiophilia and bricodage.\n\nHer personal practice focuses on the use of noise as a form of action as a form of action and reaction to symbolic violence.\nShe constructs and activates devices that scramble, exhaust, but also distill speech, as part of a protean research perspective, at the crossroads of theory, narrative, document and sound.",
                "date_modified": "2025-12-18T10:27:49.150999+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "corinamors",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "noise-from-order",
        "pk": 2732,
        "published": true,
        "publish_date": "2024-02-15T10:49:14+01:00"
    },
    {
        "title": "Voices from Resonant Spaces",
        "description": "Un court documentaire qui suit Iegor Reznikoff dans l'exploration sonore de la grotte d'Arcy au cours de laquelle il revient sur ce qui a fait le coeur de sa pensée, sa vision sur l'utilisation de la voix dans les espaces résonants, au cours des âges depuis les temps préhistoriques.",
        "content": "<p>Film (court) documentaire r&eacute;alis&eacute; par Eric Raynaud.</p>\r\n<p><a href=\"https://vimeo.com/424758568\">https://vimeo.com/424758568</a></p>\r\n<p>En novembre 2019, &agrave; l'invitation du professeur Iegor Reznikoff, Paul Oomen (Spatial Sound Institute, 4DSOUND) et Eric Raynaud (Fraction) ont suivi son exploration sonore de la grande grotte d'Arcy qui poss&egrave;de des peintures pari&eacute;tales pr&eacute;historiques sur ses murs.</p>\r\n<p>Tel un rite initiatique, il profite de cette exploration pour expliquer sa perception sur l'utilisation des voix humaines &agrave; travers les &acirc;ges dans les espaces r&eacute;sonnants, et dans le cas des grottes, les corr&eacute;lations possibles observ&eacute;es avec les emplacements des peintures anciennes, une approche qui a guid&eacute;e &agrave; la fois son parcours en tant que chercheur et artiste.&nbsp;</p>\r\n<p><img src=\"/media/uploads/user/58d76983b1db74a4c024eca497854628.jpg\" alt=\"\" width=\"1781\" height=\"1000\" /></p>\r\n<p>L'exploration, con&ccedil;ue comme un projet transversal pluridisciplinaire (anthropologique, scientifique, artistique) &eacute;tait en particulier d&eacute;di&eacute;e &agrave; la collecte de mat&eacute;riaux sonores en utilisant plusieurs moyens:</p>\r\n<p>- Enregistreur binaural plac&eacute; sur Paul Oomen pour documenter le chemin complet du voyage<br />- Deux microphones sph&eacute;riques &agrave; 19 capsules (Zylia) pour capturer au format ambisonique plusieurs longues s&eacute;quences sonores de l'&eacute;quipe progressant &agrave; travers la grotte.</p>\r\n<p>- Microphone &laquo;Cravate&raquo; plac&eacute; sur M. 
Reznikoff pour enregistrer voix direct, conversations, intonations et murmures<br />- R&eacute;ponses impulsionnelles multicanales de la grotte mesur&eacute;es avec un microphone Zylia gr&acirc;ce &agrave; l'Ircam Spat.&nbsp;</p>\r\n<p><img src=\"/media/uploads/user/7f10c898f7878694fca47182ad3d781b.jpg\" alt=\"\" width=\"1781\" height=\"1025\" /></p>\r\n<p>Cette courte s&eacute;quence vid&eacute;o capture un moment particulier du voyage lorsque le professeur Reznikoff explique certaines des id&eacute;es fondamentales derri&egrave;re les th&eacute;ories qu'il d&eacute;veloppe depuis des d&eacute;cennies, invoquant notamment le chamanisme et le r&ocirc;le primordial de la voix dans ce rituel.</p>\r\n<p>&Agrave; propos de l'enregistrement sonore de la vid&eacute;o: le mix comporte un design sonore st&eacute;r&eacute;o tr&egrave;s l&eacute;ger plac&eacute; &agrave; l'arri&egrave;re-plan d'une couche vocale de Iegor constitu&eacute;e d'un pr&eacute;-mixage entre voix direct, rendus binaural du micro zylia, et materiaux r&eacute;solus dans la reverbe &agrave; convolution synth&eacute;tis&eacute;e avec les mesures de la grotte.</p>\r\n<p><img src=\"/media/uploads/user/d02753695520523e4c96ea7021bb641f.jpg\" alt=\"\" width=\"1781\" height=\"1179\" /></p>\r\n<p>Ce court m&eacute;trage fait partie d'une documentation plus large qui est actuellement collect&eacute;e sous diff&eacute;rents biais sur l'approche singuli&egrave;re d'Iegor Reznikoff, abordant l'importance de l'utilisation de la voix humaine dans l'appr&eacute;hension des espaces audibles, son r&ocirc;le dans le fa&ccedil;onnement de la perception humaine de ces espaces, les comportements socio-culturels qui en r&eacute;sultent, et au-del&agrave;, l'&eacute;mergence de pratiques artistiques.</p>\r\n<p>Film (audio,video) r&eacute;alis&eacute; par Eric Raynaud.</p>\r\n<p>Equipe de r&eacute;alisation:<br />Eric Raynaud (Fraction)<br />Iegor Reznikoff<br />Paul Oomen (Spatial Sound Institute, 4DSOUND)</p>\r\n<p>Prises 
cam&eacute;ra:<br />Eric Raynaud</p>\r\n<p>Prsies sons:<br />Eric Raynaud<br />Paul Oomen</p>\r\n<p>Avec l'aide de :<br />Charly Gouault (Grottes d'Arcy, Guide)<br />Emma Giraud (Arcy Caves, appareil photo)</p>\r\n<p><strong>A propos:</strong><br />-Iegor Reznikoff:<br /><a href=\"https://dep-philo.parisnanterre.fr/departement-de-philosophie/les-enseignants/reznikoff-iegor-66768.kjsp\">dep-philo.parisnanterre.fr/departement-de-philosophie/les-enseignants/reznikoff-iegor-66768.kjsp</a></p>\r\n<p>-Spatial Sound Institute<br /><a href=\"https://www.spatialsoundinstitute.com\">Spatial Sound Institute (Budapest)</a></p>\r\n<p>Eric Raynaud (Fraction)</p>\r\n<p><a href=\"https://www.instagram.com/fraction_is_noise/\">Instagram</a><br /><a title=\"Fraction Website\" href=\"http://www.fractionisnoise.art\">Official website</a></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 445,
                "name": "Accoustique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 440,
                "name": "Archeosacoustique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 443,
                "name": "Chamanisme",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 446,
                "name": "Convolution",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 451,
                "name": "Documentaire",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 453,
                "name": "Ericraynaud",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 449,
                "name": "Espacesaccoustiques",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 452,
                "name": "Film",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 442,
                "name": "Grotte",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 447,
                "name": "Impulsions",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 450,
                "name": "Ircamspat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 448,
                "name": "Mesures",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 146,
                "name": "Perception",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 444,
                "name": "Prehistoire",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 441,
                "name": "Resonance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 403,
                "name": "Reverberation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1709,
            "forum_user": {
                "id": 1707,
                "user": 1709,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/profil4.png",
                "avatar_url": "/media/cache/49/37/4937ce84289a16db6f9d5ea374376dfb.jpg",
                "biography": "Fraction (Eric Raynaud) is a new media, composer and sound artist whose work focuses in particular on immersive and audiovisual experience  design.\n\nHis practice has developed from a background in music composition and spatial sound which led him to put together complete skills in the field of new media art. He now devotes his time writing and producing pieces integrating digital materials of different kinds.  He is particularly interested in forms of experience that have strong interactions between generative art and sonic matter. Combining complex scenography and hybrid digital writing with visuals, sound and physical media, he aims in particular to forge links between contemporary art and digital scope within the frame of radical experiences.\n\nFascinated by sound intensity, energy, ecstasy, and the idea of \"being able to sculpt digital disorder as a raw matter\", he finds in the lexicon of sound spatialization the appropriate field for designing atypical pieces, placing at the center of his writing the immediate physical and emotional experience.",
                "date_modified": "2025-12-29T12:55:11.027970+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fraction",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "voices-from-resonant-spaces",
        "pk": 718,
        "published": true,
        "publish_date": "2020-07-02T09:30:55+02:00"
    },
    {
        "title": "EaganMatrix Compiler: Automated Assembly Code Optimization by Cameron Fuller",
        "description": "Runtime code generation for efficiently evaluating the EaganMatrix.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/65bfb9ac1008696ea9542ef9c3dffef4.png\" width=\"1027\" height=\"333\" /></p>\r\n<p>Presented by : Cameron Fuller</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/cameronfuller01/\" target=\"_blank\">Biography</a></p>\r\n<p></p>\r\n<p>Advanced sound engines with complex synthesis algorithms require low-latency sample generation to keep up with real-time synthesis. Microprocessors such as Analog Devices&rsquo; SHARC make use of DSP-specific architecture to facilitate high performance, but tailored code generation at runtime remains an underutilized opportunity for optimization. Here I describe the EaganMatrix Compiler (EMC), a code-generating algorithm implemented on the SHARC to optimize audio sample generation for the EaganMatrix, the internal sound engine of the Haken Continuum Fingerboard. With Single Instruction/Multiple Data (SIMD) capabilities of the SHARC, the EMC&rsquo;s generated code can evaluate the EaganMatrix at 500 picoseconds per matrix point, twice as fast as the EaganMatrix&rsquo;s previous optimizations and 40 times faster than functionally identical code generated by Analog Devices&rsquo; C/C++ compiler for SHARC. This improved efficiency reduces the computational demand of complex EaganMatrix presets, especially for high polyphony.</p>\r\n<p></p>",
        "topics": [
            {
                "id": 331,
                "name": "Compilers",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2493,
                "name": "Continuum Fingerboard",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2491,
                "name": "embedded",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2492,
                "name": "firmware",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2494,
                "name": "Haken Audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 87661,
            "forum_user": {
                "id": 87558,
                "user": 87661,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_6623_RKLLdV7.jpg",
                "avatar_url": "/media/cache/dd/1a/dd1a6558351602bb986390ea63152cc1.jpg",
                "biography": "Cameron Fuller is an engineer with Haken Audio in Champaign, Illinois, United States. He received his Bachelor of Science in Electrical Engineering and Bachelor of Music from the University of Illinois.\nAt Haken Audio, he has contributed to the development of expressive electronic musical instruments such as the Continuum Fingerboard by managing device assembly and developing embedded firmware.",
                "date_modified": "2025-10-10T20:57:29.270045+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "cameronfuller01",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "eaganmatrix-compiler-automated-assembly-code-optimization-by-cameron-fuller",
        "pk": 3185,
        "published": true,
        "publish_date": "2024-12-26T00:34:27+01:00"
    },
    {
        "title": "La Sonification comme Technique de Composition et Moyen d'Expression Artistique - Maria Kallionpää",
        "description": "La question clé est de savoir comment former une idée centrale artistiquement innovante d'une pièce musicale et de trouver les instruments les plus idéaux pour la réaliser ?",
        "content": "<p></p>\r\n<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"330\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p>Pr&eacute;sent&eacute; par :&nbsp;<span>Maria Kallionp&auml;&auml;</span><br /><a href=\"https://forum.ircam.fr/profile/makallio/\">Biographie</a></p>\r\n<p>Malgr&eacute; la vari&eacute;t&eacute; sans cesse croissante des moyens technologiques accessibles aux compositeurs, interpr&egrave;tes, orchestrateurs, producteurs de musique et p&eacute;dagogues musicaux de notre &eacute;poque, l'essence de la composition musicale est rest&eacute;e essentiellement la m&ecirc;me depuis des si&egrave;cles : la question cl&eacute; est de savoir comment former une id&eacute;e centrale artistiquement innovante d'une pi&egrave;ce musicale et de trouver les instruments les plus id&eacute;aux pour la r&eacute;aliser.&nbsp;</p>\r\n<p>Ce&nbsp;texte traite des explorations des compositeurs pour &eacute;tablir leur voix individuelle, ainsi que pour trouver et s&eacute;lectionner les outils et les techniques qui serviraient le mieux leurs objectifs artistiques dans le r&eacute;seau complexe et pluraliste de l'esth&eacute;tique d'aujourd'hui. Pour faciliter cette d&eacute;marche, diverses solutions logicielles sont apparues sur le march&eacute;, permettant d'utiliser une plus grande palette de sons comme \"mati&egrave;re premi&egrave;re\" musicale. Au lieu d'une musique pr&eacute;-compos&eacute;e bas&eacute;e sur les rythmes et les hauteurs organis&eacute;s par le compositeur, nous nous concentrerons sur la mani&egrave;re de traduire d'autres types de donn&eacute;es en \"notes\", ou plus g&eacute;n&eacute;ralement en \"&eacute;v&eacute;nements sonores\", et sur la mani&egrave;re de les utiliser d'une mani&egrave;re artistiquement significative, aboutissant &agrave; des compositions musicales compl&egrave;tes &agrave; part enti&egrave;re. 
Pour ce faire, nous pr&eacute;senterons des &eacute;tudes de cas de composition bas&eacute;es sur la sonification, r&eacute;alis&eacute;es &agrave; l'aide de diverses m&eacute;thodes et technologies. Celles-ci incluent, par exemple, l'application du logiciel ORCID dans la pratique artistique. Nous pr&eacute;senterons l'&oelig;uvre &agrave; th&egrave;me environnemental de Maria Kallionpaa \"El Canto del Mar Infinito\" (2020), ainsi que sa composition \"The Reef\" (2023), dont le mat&eacute;riau musical est bas&eacute; sur l'analyse informatique des sons enregistr&eacute;s sur un r&eacute;cif corallien. En outre, nous discuterons de l'&oelig;uvre d'Olga Neuwirth \"Kloing !\", dont une partie du mat&eacute;riau provient des donn&eacute;es sismiques recueillies dans la r&eacute;gion de Sumatra lors du tremblement de mer qui a provoqu&eacute; la catastrophe du tsunami le 26 d&eacute;cembre 2004.</p>\r\n<p>Les techniques de sonification ne produisent pas d'esth&eacute;tique musicalement utilisable en soi. Cela soul&egrave;ve la question des domaines cr&eacute;atifs dans lesquels les compositeurs s'engagent lorsqu'ils impliquent des techniques de sonification dans leur processus. Ces domaines comprennent :</p>\r\n<ul>\r\n<li>Le choix des donn&eacute;es ou du sujet &agrave; sonifier</li>\r\n<li>Le pr&eacute;traitement de ces donn&eacute;es (par exemple, la quantification d'une s&eacute;rie de valeurs &agrave; virgule flottante en valeurs de hauteur enti&egrave;res et en temps m&eacute;tronomiques)</li>\r\n<li>Une conception sp&eacute;cifiquement musicale de la m&eacute;thode de sonification, c'est-&agrave;-dire la transformation ou le mappage des donn&eacute;es en structures sonores</li>\r\n<li>Un travail cibl&eacute; sur les param&egrave;tres de la sonification.</li>\r\n</ul>\r\n<p>Une autre approche a &eacute;t&eacute; explor&eacute;e par le syst&egrave;me TouchNoise (2014-2017), d&eacute;velopp&eacute; par Axel Berndt, Nadia Al-Kassab et Raimund Dachselt. 
Il s'agit d'une sonification d'une simulation de particules. Le domaine de la composition est ici couvert principalement par une palette de techniques d'interaction avec le champ de particules, y compris des manipulations directes de la distribution des particules ainsi que des algorithmes de champ d'&eacute;coulement et de flocage. Nous discuterons de l'esth&eacute;tique que cette approche &eacute;voque.</p>\r\n<p><span style=\"text-decoration: underline;\">Auteurs :</span> Maria Kallionpaa, Prof. Dr.-Ing. Axel Berndt, Mag. Rer. Nat., Dipl. Hans-Peter Gasselseder</p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/maria_piano_rauhalammi_copy_-_maria_kallionpaa.jpg\" alt=\"\" width=\"496\" height=\"330\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18143,
            "forum_user": {
                "id": 18137,
                "user": 18143,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Maria_Piano_Rauhalammi_copy.jpg",
                "avatar_url": "/media/cache/c9/0b/c90b3c3785be3f428a1fe305799480ce.jpg",
                "biography": "Dr. Maria Kallionpää is an internationally active composer and pianist, working as an artistic researcher at the Hochschule für Musik Detmold (2023-). Kallionpää was an assistant professor of composition and contemporary music performance at the Hong Kong Baptist University (2018-2022), and has been a composer in residence of the Mixed Reality Laboratory of the University of Nottingham since 2016 until present. 2016-2018 she worked as a postdoctoral fellow at the University of Aalborg, her artistic research focusing on gamification as a composition technique. Kallionpää obtained her PhD in composition at the university of Oxford in 2015. Furthermore, as a winner of the Fabbrica Young Artist Development Program of Opera di Roma, Kallionpää was commissioned an opera that was premiered at Teatro Nazionale Rome in 2017. In collaboration with her colleague Markku Klami, Kallionpää composed the first full length puppet opera produced in the Nordic Countries (premiered in 2018). She was a laureate of Académie de France à Rome in 2016.",
                "date_modified": "2024-12-17T18:28:53.120831+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "makallio",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sonification-as-a-composition-technique-and-means-of-artistic-expression",
        "pk": 2713,
        "published": true,
        "publish_date": "2024-02-05T17:13:20+01:00"
    },
    {
        "title": "Trois états de cire :  La nature du matériau dans l'improvisation électronique en direct - Juan Parra Cancino, Jonathan Impett",
        "description": "Cette présentation explore les approches créatives, performatives et techniques employées par le duo Three States of Wax : Jonathan Impett (trompette et électronique) et Juan Parra Cancino (guitare et électronique).\r\n\r\nLe concept de matérialité de l'improvisation est au cœur de cette exploration. Le titre et l'approche sont inspirés par l'enquête du philosophe des sciences Michel Serres sur les matériaux de la physique. En prolongeant l'expérience de pensée de Descartes impliquant un morceau de cire (toujours le même, mais toujours changeant), Serres identifie trois perspectives : l'objet tel qu'il est perçu, l'objet tel qu'il est décrit à travers ses propriétés, et l'objet en tant que nœud informationnel. Ce dernier point de vue englobe non seulement toute l'histoire de son origine et de ses interactions, mais il se transforme continuellement à chaque rencontre ou examen. Serres soutient que cette perspective est particulièrement pertinente dans notre monde dominé par l'information.",
        "content": "<p><strong><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br /></strong>Pr&eacute;sent&eacute; par :&nbsp;Juan Parra Cancino, Jonathan Impett<br /><a href=\"https://forum.ircam.fr/profile/jotaparra/\">Biographie</a></p>\r\n<p><strong><br />Trois &eacute;tats de cire :</strong></p>\r\n<p><strong>La nature du mat&eacute;riau dans l'improvisation &eacute;lectronique en direct</strong></p>\r\n<p><strong>Juan Parra Cancino &amp; Jonathan Impett. Institut Orpheus, Gand.</strong></p>\r\n<p>Cette pr&eacute;sentation explore les approches cr&eacute;atives, performatives et techniques employ&eacute;es par le duo <em>Three States of Wax</em> : Jonathan Impett (trompette et &eacute;lectronique) et Juan Parra Cancino (guitare et &eacute;lectronique).</p>\r\n<p>Le concept de mat&eacute;rialit&eacute; de l'improvisation est au c&oelig;ur de cette exploration. Le titre et l'approche sont inspir&eacute;s par l'enqu&ecirc;te du philosophe des sciences Michel Serres sur les mat&eacute;riaux de la physique. En prolongeant l'exp&eacute;rience de pens&eacute;e de Descartes impliquant un morceau de cire (toujours le m&ecirc;me, mais toujours changeant), Serres identifie trois perspectives : l'objet tel qu'il est per&ccedil;u, l'objet tel qu'il est d&eacute;crit &agrave; travers ses propri&eacute;t&eacute;s, et l'objet en tant que n&oelig;ud informationnel. Ce dernier point de vue englobe non seulement toute l'histoire de son origine et de ses interactions, mais il se transforme continuellement &agrave; chaque rencontre ou examen. Serres soutient que cette perspective est particuli&egrave;rement pertinente dans notre monde domin&eacute; par l'information.</p>\r\n<p>La musique &eacute;lectroacoustique improvis&eacute;e est paradigmatique des pratiques contemporaines dans diverses dimensions. 
Elle soul&egrave;ve des questions fondamentales sur la nature du mat&eacute;riau et sa repr&eacute;sentation, l'attribution de la paternit&eacute; et l'&eacute;mergence de la structure &agrave; travers le temps. Dans une culture de syst&egrave;mes de performance hautement personnalis&eacute;s, la manipulation de mat&eacute;riaux partag&eacute;s devient une question technique centrale.</p>\r\n<p>Dans <em>Three States of Wax</em>, le mat&eacute;riau est manipul&eacute; au sens de Serres - non seulement en tant que repr&eacute;sentation (g&eacute;n&eacute;r&eacute;e a priori ou en temps r&eacute;el), mais aussi en tant que traces de son interrogation et de sa m&eacute;diation. &Agrave; cet &eacute;gard, notre travail est en accord avec les recherches actuelles dans le domaine des humanit&eacute;s num&eacute;riques - le c&oelig;ur du syst&egrave;me pourrait &ecirc;tre consid&eacute;r&eacute; comme une carte de connaissances en &eacute;volution. L'information est &eacute;chang&eacute;e, compar&eacute;e et trait&eacute;e &agrave; travers un spectre de modalit&eacute;s. La distinction entre ces modalit&eacute;s est la force motrice de la musique, un peu comme un moteur lacanien avec des facettes r&eacute;elles, symboliques et imaginaires. <em>Three States of Wax</em> introduit une nouvelle couche dans le domaine du traitement des instruments en direct. Ce qui &eacute;tait auparavant des syst&egrave;mes individuels personnalis&eacute;s apparemment incommensurables se transforme en un r&eacute;seau dynamique. Le \"moteur de diff&eacute;rence\" central devient une voix autonome, utilisant des sons analogiques pour d&eacute;voiler la dynamique inh&eacute;rente &agrave; la structure &eacute;mergente.</p>\r\n<p>Cette pr&eacute;sentation explore les concepts et les strat&eacute;gies utilis&eacute;s par Impett et Parra pour &eacute;tablir ce r&eacute;seau sonore. 
Des &eacute;l&eacute;ments tels que les hauntologies et les principes rappelant la vie artificielle contribuent tous &agrave; la cr&eacute;ation de multiples chemins ramifi&eacute;s et de leurs d&eacute;viations. Ces chemins sont con&ccedil;us pour agir comme une r&eacute;sistance tangible et instrumentale au sein du syst&egrave;me partag&eacute; et de ses interpr&egrave;tes.<br /><br /><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong><br /><br /></p>",
        "topics": [
            {
                "id": 1825,
                "name": "a-life",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1822,
                "name": "free improvisation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1824,
                "name": "hauntologies",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1823,
                "name": "networks",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1821,
                "name": "Serres",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27798,
            "forum_user": {
                "id": 27770,
                "user": 27798,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/JUANOPERA.png",
                "avatar_url": "/media/cache/c7/62/c7624839fd9abf2ecdda050db1e1a048.jpg",
                "biography": "Juan Parra Cancino studied Composition at the Catholic University of Chile and Sonology at The Royal Conservatoire The Hague (NL), where he obtained his Masters degree with focus on composition and performance of electronic music. In 2014, Juan obtained his PhD degree from Leiden University with his thesis “Multiple Paths: Towards a Performance practice in Computer Music”. His compositions have been performed in Europe, Japan, North and South America. Founder of The Electronic Hammer, a Computer and Percussion trio and Wiregriot, (voice & electronics), he collaborates regularly with Ensemble KLANG (NL) and Hermes (BE), among many others. His work in the field of live electronic music has made him recipient of numerous grants such as NFPK, Prins Bernhard Cultuurfonds and the International Music Council. Since 2009 Juan has been appointed as a joint researcher of the Orpheus Institute Research Centre in Music to work on the topics of creativity and performance applied to electronic music.\n\nJuan has recently been appointed as Regional Director for Europe of the International Computer Music Association for the period 2022-2026.",
                "date_modified": "2025-12-25T18:30:21.585575+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 670,
                        "forum_user": 27770,
                        "date_start": "2025-12-19",
                        "date_end": "2026-12-19",
                        "type": 0,
                        "keys": [
                            {
                                "id": 205,
                                "membership": 670
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "jotaparra",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 27798,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2754,
                    "user": 27798,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "three-states-of-wax-the-nature-of-material-in-live-electronic-improvisation",
        "pk": 2754,
        "published": true,
        "publish_date": "2024-02-19T11:26:27+01:00"
    },
    {
        "title": "OMaxVideo - Georges Bloch",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>OmaxVideo a d&rsquo;abord &eacute;t&eacute; d&eacute;velopp&eacute; pour faire de la vid&eacute;o avec OMax. Au d&eacute;part, cela consistait &agrave; diffuser des vid&eacute;os des interpr&egrave;tes, vid&eacute;os correspondantes aux sons choisis par OMax (ICMC 2008). C&rsquo;est un programme Max/jitter.</p>\r\n<p></p>\r\n<p>Mais, en fait, OMaxVideo a &eacute;t&eacute; depuis utilis&eacute; avec tous les logiciels de la galaxie Omax (OMax, Somax, DYCI2, Improtek, Djazz, etc.). Surtout il est pr&eacute;vu pour fonctionner avec tous les logiciels faisant de la synth&egrave;se concat&eacute;native, lorsque l&rsquo;on a des images li&eacute;es au sons ; on peut penser &agrave; Catart, ou simplement &agrave; Live dans la plupart de ses utilisations avec des samples). Une des grandes originalit&eacute;s de OMaxVideo est qu&rsquo;il est con&ccedil;us comme un programme o&ugrave; l&rsquo;image est avant tout d&eacute;pendante de la musique.</p>\r\n<p></p>\r\n<p>La derni&egrave;re version a &eacute;t&eacute; enti&egrave;rement r&eacute;&eacute;crite et poss&egrave;de des fonctionnalit&eacute;s pratiques en situation de concert (collections de films et photos, presets, etc.).</p>\r\n<p></p>\r\n<p>En bref :</p>\r\n<p>Si (une partie de) votre musique utilise des extraits de musiques existantes (ou &eacute;chantillons un peu longs)</p>\r\n<p>Si vous avez des images attach&eacute;es &agrave; ces &eacute;chantillons (images des interpr&egrave;tes en train de jouer, photos, animations, etc.)</p>\r\n<p>Si vous d&eacute;sirez une synchronisation stricte (y compris en temps r&eacute;el) entre ce montage d&rsquo;extraits et les images (c&rsquo;est &eacute;videmment le cas si les images sont celles d&rsquo;une ou un interpr&egrave;te qui joue la musique ou danse sur elle)</p>\r\n<p>OMaxVideo est pour vous !</p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/capture_d&rsquo;&eacute;cran_2023-03-24_&agrave;_10.13.21.png\" alt=\"\" width=\"1077\" height=\"688\" 
/></p>\r\n<p>Fig : Ecran de OMax video. Le programme proprement dit en haut &agrave; gauche et l&rsquo;&eacute;cran de rendu en bas. En haut &agrave; droite, le programme permettant la construction, le rappel et la modification en temps r&eacute;el de configurations. En bas &agrave; gauche, la r&eacute;ception de donn&eacute;es de la part du programme musical.</p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/capture_d&rsquo;&eacute;cran_2023-03-24_&agrave;_10.13.32.png\" alt=\"\" width=\"1083\" height=\"663\" /></p>\r\n<p>Fig : Herv&eacute; Sellin improvise avec les &laquo;&nbsp;Trois dames&nbsp;&raquo; Piaf, Della Casa et Billie Hollydays. OMaxVideo synchronise les extraites avec les images. Festival Manifeste 2020. <span class=\"Apple-converted-space\">&nbsp;</span></p>",
        "topics": [],
        "user": {
            "pk": 18130,
            "forum_user": {
                "id": 18124,
                "user": 18130,
                "first_name": "Georges",
                "last_name": "Bloch",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e43ebea0e8bbfd1fdcba2be956020a59?s=120&d=retro",
                "biography": "Compositeur et enseignant-chercheur, ses compositions portent essentiellement sur l’interaction avec les interprètes ou les espaces bizarres. Georges Bloch a notamment composé pour l’espace des salines d’Arc-et Senans, la fondation Beyeler, et travaillé avec des musiciens improvisateurs comme Philippe Leclerc, Hervé Sellin ou Jaap Blonk… Sa recherche avec l’équipe Représentations musicales de l’Ircam en fait un des « Omax brothers », participant à la galaxie de logiciels d’improvisation composée dont le premier représentant présent sur le forum a été Omax.\r\nGeorges Bloch enseigne à l’université de Strasbourg, où il est membre du Centre de recherches & d’expérimentation sur l’acte artistique (ITI CREAA). Il a participé au développement de la première formations française de Tonmeister (musicien-ingénieur du son) à Strasbourg puis a dirigé celle du CNSM de Paris. Il est également co-créateur du master « Ecoute critique et production de musiques actuelle » s à Strasbourg.\r\nSon intérêt pour le répertoire lyrique l’amène à travailler sur la dramaturgie musicale et son lien avec les musiques de films.",
                "date_modified": "2025-07-28T09:48:00.354175+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 887,
                        "forum_user": 18124,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-25",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "gbloch",
            "first_name": "Georges",
            "last_name": "Bloch",
            "bookmarks": []
        },
        "slug": "omaxvideo-geroges-bloch",
        "pk": 2136,
        "published": true,
        "publish_date": "2023-03-14T14:43:59+01:00"
    },
    {
        "title": "Latent Terrain: Dissecting the Latent Space of Neural Audio Autoencoders by Shuoyang Jasper Zheng",
        "description": "Exploring musical affordances of neural networks beyond their task-oriented capabilities and deriving sonic materials for musical expressions.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<p></p>\r\n<p><img alt=\"A photo of the latent terrain synth interface being used\" src=\"https://forum.ircam.fr/media/uploads/user/be2b5c94b36a32e85ac1a738e7b25b86.jpg\" width=\"970\" height=\"412\" /></p>\r\n<p>Presented by : Shuoyang Jasper Zheng</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/szheng/\" target=\"_blank\">Biography</a></p>\r\n<p>&nbsp;</p>\r\n<p>We present<span>&nbsp;</span><em>Latent Terrain</em>, an algorithmic approach to dissecting the latent space of a neural audio autoencoder into a two-dimensional plane.<span>&nbsp;</span><em>Latent Terrain</em><span>&nbsp;</span>questions the conventional paradigms of dimensionality reduction in creative interactive systems, in which the projection from high to low dimensional spaces is done by modelling similar objects with nearby points. Instead, with a mountainous and steep surface, a terrain material generated by our approach affords greater spectral complexity when navigating an audio autoencoder's latent space.</p>\r\n<p>Extending from this, we present<span>&nbsp;</span><em>Latent Terrain Synthesis</em>, which is a method for sound synthesis whereby a waveform is generated by pathing through a terrain surface. 
Latent terrain synthesis aims to help musicians create tailorable and flexible materials for exploring musical expression, leveraging the sonic capabilities of neural audio autoencoders such as RAVE.&nbsp;</p>\r\n<p>We provide our MaxMSP externals<span>&nbsp;</span><a href=\"https://github.com/jasper-zheng/nn_terrain\"><em>nn_terrain</em></a><span>&nbsp;</span>that work together with nn~ to generate latent terrains for pre-trained RAVE models and allow users to navigate the terrain in real time.</p>\r\n<p>In this talk, I will first present the technical details behind latent terrain, the workflow, how it integrates with RAVE, and a demo interface with a tablet and a stylus. I will also present a recent user study workshop at the Centre for Digital Music at Queen Mary University of London, conducted with co-authors Anna Xamb&oacute; Sed&oacute; and Nick Bryan-Kinns, in which 18 musicians from various backgrounds explored musical affordances and derived sonic materials for musical expression.&nbsp;&nbsp;</p>\r\n<p></p>\r\n<p>Acknowledgment: This work is supported by the UKRI Centre for Doctoral Training in Artificial Intelligence and Music, supported by UK Research and Innovation [grant number EP/S022694/1].</p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1805,
                "name": "electroacoustic music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1745,
                "name": "nn~",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 69080,
            "forum_user": {
                "id": 69009,
                "user": 69080,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/shuoyang-zheng-1.jpg",
                "avatar_url": "/media/cache/c0/d8/c0d886223964e6fe8ff821e950f11b73.jpg",
                "biography": "Shuoyang Jasper Zheng (he/him) is a PhD student at the Centre for Digital Music (C4DM), Queen Mary University of London, supported by the UKRI Centre for Doctoral Training in AI and Music (AIM). His works explore AI systems through their convergence with media and arts, primarily focusing on the development of interactive and understandable tools that facilitate musical creations and expressions, and on the understanding of how these technological advances impact artistic practices. Besides technological perspectives, he is equally interested in the aesthetical and ethical implications inherent to the development of AI. \n\nShuoyang is also an associate lecturer at the Creative Computing Institute (CCI), University of the Arts London, leading the Mathematics and Statistics for Data Science unit. Previously, he got a BSc in Computer Science at the University of Liverpool. In 2021/22, he spent a year at CCI to pursue an MSc in Creative Computing, and wrote his thesis on real-time interface for human-AI interaction.  \n\nDuring the 2020 lockdown, he wrote songs and produced tracks under the name \"Alaska Winter\".",
                "date_modified": "2026-02-23T19:46:06.487022+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "szheng",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 128,
                    "user": 69080,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3280,
                    "user": 69080,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "latent-terrain-dissecting-the-latent-space-of-neural-audio-autoencoder-by-shuoyang-jasper-zheng",
        "pk": 3280,
        "published": true,
        "publish_date": "2025-02-12T02:26:03+01:00"
    },
    {
        "title": "Crafting Digital Success: The Role of Web Designing Companies in Delhi",
        "description": "In conclusion, for businesses in Delhi looking to make a mark in the digital space, partnering with a web designing company is a strategic investment. These companies bring a blend of expertise, creativity, and technical know-how to the table, creating websites that not only look impressive but also drive results. Whether it's a custom website design, e-commerce solution, or ongoing maintenance and support, web designing companies in Delhi play a crucial role in crafting digital success for businesses in the capital city of India.",
        "content": "<p>In the digital age, a strong online presence is crucial for businesses to succeed, and a well-designed website is often the first step towards establishing that presence. For businesses in Delhi, the capital city of India, partnering with a reputable <a href=\"https://growthleadersconsulting.com/website-design-development/\">web designing company</a> can make all the difference in creating a compelling and effective online platform.</p>\n<p>Delhi, a thriving metropolis known for its diverse business landscape, is home to companies across various industries, from IT and finance to e-commerce and healthcare. In this competitive environment, having a professionally designed website sets a business apart and helps it attract and retain customers.</p>\n<p>A web designing company in Delhi offers a range of services designed to create websites that not only look impressive but also function seamlessly and drive results. These companies employ skilled web designers, developers, and UX/UI experts who work together to create websites that are visually appealing, user-friendly, and optimized for search engines.</p>\n<p>One of the primary benefits of working with a web designing company in Delhi is their deep understanding of the local market. Delhi has a unique business landscape with diverse demographics and preferences. Companies based in Delhi leverage this knowledge to create websites that resonate with the target audience, ensuring maximum engagement and conversions.</p>\n<p>Furthermore, web designing companies in Delhi stay abreast of the latest trends and technologies in the industry. They are well-versed in responsive design, ensuring that websites are optimized for viewing on various devices, including smartphones and tablets. 
They also understand the importance of fast loading times, intuitive navigation, and compelling content, all of which contribute to a positive user experience.</p>\n<p><strong>The services offered by web designing companies in Delhi are comprehensive and tailored to the specific needs of businesses. Some of the key services include:</strong></p>\n<p>1. <strong>Custom Website Design:</strong> These companies create bespoke website designs that reflect the brand identity and values of the business. From color schemes and typography to layout and imagery, every aspect is carefully crafted to make a lasting impression on visitors.</p>\n<p>2. <strong>Responsive Web Design:</strong> With the increasing use of mobile devices, responsive design is crucial. Web designing companies in Delhi ensure that websites adapt and display correctly on all screen sizes, providing a seamless experience for users.</p>\n<p>3. <strong>E-commerce Website Development:</strong> For businesses looking to sell products or services online, web designing companies offer e-commerce solutions. These include user-friendly interfaces, secure payment gateways, and inventory management systems.</p>\n<p>4. <strong>Search Engine Optimization (SEO):</strong> A well-designed website is only effective if it can be found by potential customers. Web designing companies in Delhi implement SEO best practices to improve website visibility and ranking on search engine results pages.</p>\n<p>5. <strong>Website Maintenance and Support:</strong> Building a website is just the beginning. 
These companies also offer ongoing maintenance and support services to ensure that websites remain up-to-date, secure, and optimized for performance.</p>\n<p><strong>Partnering with a web designing company in Delhi not only ensures a visually appealing website but also provides a range of benefits for businesses:</strong></p>\n<p>- <strong>Professionalism:</strong> A professionally designed website conveys credibility and professionalism, building trust with potential customers.<br><br>- <strong>Brand Identity:</strong> The design of a website plays a significant role in shaping brand perception. A <a href=\"https://growthleadersconsulting.com/website-design-development/\">web designing company in Delhi</a> can create a website that aligns with the brand identity and values, helping businesses establish a strong online presence.<br><br>- <strong>User Experience:</strong> User experience is crucial for keeping visitors engaged and encouraging them to explore further. Web designing companies focus on creating intuitive navigation and easy-to-use interfaces to enhance the user experience.</p>\n<p>- <strong>Conversion Optimization:</strong> An effective website is one that converts visitors into customers. Web designing companies in Delhi use strategic design elements and call-to-action placements to drive conversions and achieve business goals.</p>\n<p>In conclusion, for businesses in Delhi looking to make a mark in the digital space, partnering with a web designing company is a strategic investment. These companies bring a blend of expertise, creativity, and technical know-how to the table, creating websites that not only look impressive but also drive results. Whether it's a custom website design, e-commerce solution, or ongoing maintenance and support, web designing companies in Delhi play a crucial role in crafting digital success for businesses in the capital city of India.</p>",
        "topics": [
            {
                "id": 1870,
                "name": "Web Designing Companies in Delhi",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 65853,
            "forum_user": {
                "id": 65783,
                "user": 65853,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7387e696ad807ca13216083eb9870678?s=120&d=retro",
                "biography": "At Growthleadersconsulting, Increasing sales is the most critical task for any small business and we at Digital Growth Partner, will help you in generating the right volume of quality leads using the most cost efficient channels that will power your business.",
                "date_modified": "2024-03-05T11:22:53.392036+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "growthleadersconsult",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2801,
                    "user": 65853,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2801,
                    "user": 65853,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "crafting-digital-success-the-role-of-web-designing-companies-in-delhi",
        "pk": 2801,
        "published": true,
        "publish_date": "2024-03-05T11:21:32.683841+01:00"
    },
    {
        "title": "Sound Entities: A Practice in Spatial Composition",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>This presentation elaborates on a framework for spatial composition centered on the generation of trajectories in periphonic and binaural reproduction environments. <br />At the outset, the question of space's role in electroacoustic composition will be discussed, with a particular focus on the desire to elevate space to equal status with the musical parameters of frequency, time, timbre, and amplitude. <br />The research will then be framed in this context, showing how such a goal may be achieved given the relationship between the nature of instrumental performance and these compositional parameters. <br />Next, the Sound Entity Creator system will be described, with a definition of the Sound Entity serving as a guide for the role trajectory plays in the compositional framework. Technical aspects of the Max for Live patches used in the system will be discussed, followed by an outline of how the attributes of a Sound Entity reveal a compositional method in which trajectory, and space more generally, can aid in determining form in a piece. Examples will be given from such a work composed by the presenter. <br />Finally, speculation on the future of the system and the role of visual elements such as virtual reality environments will be touched upon.</p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 386,
                "name": "Composition strategies",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 297,
                "name": "Electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 103,
                "name": "MaxforLive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 899,
                "name": " spatialization ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 19957,
            "forum_user": {
                "id": 19950,
                "user": 19957,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Id_Photo.png",
                "avatar_url": "/media/cache/82/17/8217e701046e5d1001a93f5185baf8fe.jpg",
                "biography": "David Schnug is a composer/researcher exploring the intersection of electroacoustic composition and spatial audio technology. This work focuses on the development of virtual instruments that spatialize sound in real time, exploring the compositional ramifications of psychoacoustics, and creating works in extended reality.",
                "date_modified": "2025-04-28T19:47:49.646333+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "davidschnug",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sound-entities-a-practice-in-spatial-composition",
        "pk": 1310,
        "published": true,
        "publish_date": "2022-09-07T14:32:05+02:00"
    },
    {
        "title": "Acousmatic massages by Vincent Isnard and Laurent Corvalán Callegos",
        "description": "Amongst the new artistic research residencies that started in 2024, one led by an atypical duo explores the concept of ‘acousmatic massage’ – a project that walks the line between artistic performance and therapeutic practice.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/ensemble_irmã_-_photo_1-1411x935.jpg\" alt=\"\" max-width=\"1411\" max-height=\"935\" /></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">Presented by Vincent Isnard, Laurent Corval&aacute;n Callegos</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/VincentISNARD/\" target=\"_blank\">Biography&nbsp;Vincent Isnard</a></div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/lisnard/\" target=\"_blank\">Biography Laurent Corval&agrave;n Callegos </a></div>\r\n<div class=\"c-content__button\"><a href=\"https://www.ircam.fr/person/isabelle-viaud-delmon\" target=\"_blank\">Biography Isabelle Viaud Delmon</a></div>\r\n<div class=\"c-content__button\"><a href=\"https://ressources.ircam.fr/fr/composer/denis-dufour/biography\" target=\"_blank\">Biography Denis Dufour</a><a href=\"https://ressources.ircam.fr/fr/composer/denis-dufour/biography\" target=\"_blank\">Biography Denis Dufour</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><span>As part of this residency, a perceptual experiment will be conducted, involving approximately sixty participants (who will be divided into two even groups: the first made up of people already familiar with this kind of artistic practices, and the second of people who are new to it). 
The goal of this experiment is first to collect their impressions: do they consider the acousmatic massage to be a kind of music or sound performance, or a relaxation technique or similar practice that can have a beneficial influence on their state of consciousness? Then, this experiment can help refine the protocol and identify what works and what does not, depending on the sound objects that are used, their trajectories within the space, the way the experiment has been introduced to the participants, the social context, etc. It also helps to identify the relevant perceptual parameters, with a view to constituting, by the end of the research project, a corpus of spatialized sounds that are conducive to the creation of this kind of sound environment.</span></div>\r\n<div class=\"c-content__button\"><span></span></div>\r\n<div class=\"c-content__button\"><span><span>The term &lsquo;</span><em>massage</em><span>&rsquo; refers to the body and the way it is treated, in a therapeutic perspective or at least one of well-being (without necessarily making a connection to the well-known concept of &lsquo;</span><em>sound massages</em><span>&rsquo;), while the term &lsquo;acousmatic&rsquo; refers to its etymological sense: a sound that is heard without its originating cause being seen.</span><br /><span>With the emergence of this concept came the necessity to refine its definition and protocol and to explore all its possibilities: this is what led us to undertake this artistic research residency.</span></span></div>\r\n<div class=\"c-content__button\">Finally, a series of standardized, rigorous and scientifically precise surveys will be distributed to the participants in order to assess the extent to which this kind of massage can be considered a therapeutic practice, as well as the beneficial effects it can have on our mental states and emotional reactions.</div>\r\n<div class=\"c-content__button\"><span>This residency offers the opportunity for an interesting 
convergence between art and scientific research in the field of sound perception&hellip;</span></div>\r\n<div class=\"c-content__button\"><span></span></div>\r\n<div class=\"c-content__button\"><span><img src=\"/media/uploads/ensemble_irmã_-_photo_3-1411x1058.jpg\" alt=\"\" max-width=\"1411\" max-height=\"1058\" /></span></div>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 429,
            "forum_user": {
                "id": 429,
                "user": 429,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6ae307980c2a3ac5af0fbe3706e063f1?s=120&d=retro",
                "biography": null,
                "date_modified": "2023-09-11T12:37:29.591061+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 43,
                        "forum_user": 429,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "VincentISNARD",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "acousmatic-massages-by-vincent-isnard-and-laurent-corvalan-callegos",
        "pk": 3338,
        "published": true,
        "publish_date": "2025-03-07T12:07:58+01:00"
    },
    {
        "title": "HEADER: a Harmonized Enveloping Audio Digital Experience Renderer",
        "description": "Abstract for HEADER project",
        "content": "<p><img src=\"/media/uploads/header.png\" alt=\"\" width=\"1212\" height=\"754\" /></p>\r\n<p></p>\r\n<p>An audio-visual interactive composition was designed and written to explore listener envelopment, defined here as how closely connected listeners felt towards the work. Aspects of user control are also examined, as well as the differences in rendering the same interactive digital signal processing on different audio sources. Thematic content of both the audio and the visuals are discussed in relation to other aspects of the design, as well as the concept of using the internet as a performance tool.</p>\r\n<p>&nbsp;</p>\r\n<p>This project was designed in Max/MSP and JavaScript and has been implemented over the web utilizing the Web Audio API. A head-tracker controls aspects of the digital signal processing, analyzing the user&rsquo;s head rotation and position captured via their webcam and using said information to control the number of voices in a harmonizer, the Dry/Wet value of a chorus effect, the sound source position in a binaural field, and the gain of the processed audio.</p>\r\n<p>&nbsp;</p>\r\n<p>Six etudes were mixed binaurally, written and recorded as one piece with content written to highlight aspects of the design and explore listener connectivity. Visuals were created using WebGL in the Jitter environment of Max/MSP, drawing from Music Information Retrieval (MIR) techniques to further listener connectivity.</p>\r\n<p>&nbsp;</p>\r\n<p>The project was reviewed by an expert panel consisting of panelists with backgrounds in digital signal processing, interactive multimedia installations, computer music, visual arts, musical listening, and face-tracking technology. Results conclusively show that listeners felt more connected with the audio than in a typical listening experience. 
Results also indicate that the control aspects of the design are successful, and that the digital signal processing is effective on multiple audio sources.</p>",
        "topics": [],
        "user": {
            "pk": 31258,
            "forum_user": {
                "id": 31211,
                "user": 31258,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_A3A90833A234-1.jpeg",
                "avatar_url": "/media/cache/8f/80/8f803b2e4f2fb679519dceb1cea05dd5.jpg",
                "biography": "Sam Platt is a Brooklyn based audio engineer and producer, specializing in spatial audio. Sam’s work continues to evolve with the ever-changing industry, staying on top of new technologies to build on his strong foundation of in-studio and live concert work. He is a recent graduate from NYU’s Music Technology master’s program, where he focused his research in digital signal processing, concentrating specifically on 3D audio and utilizing human-computer interactivity to better connect listeners with the music. Sam has toured domestically and internationally as both a Monitor and Front of House engineer, most notably with the actor and musician David Duchovny. Sam has worked in many New York venues, including Webster Hall and the legendary Comedy Cellar, where he was the Senior Sound Engineer. Sam has run sessions at a variety of studios across the New York area, but is primarily focused on growing his own Brooklyn based studio. Sam designed the studio with the independent musician in mind, creating an ideal environment for tracking high quality demos and overdubs without the high price.",
                "date_modified": "2022-08-26T12:28:28+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "smplatt",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "header-a-harmonized-enveloping-audio-digital-experience-renderer",
        "pk": 1297,
        "published": true,
        "publish_date": "2022-09-05T21:20:23+02:00"
    },
    {
        "title": "Mit dem Mond im Gesicht",
        "description": "Mit dem Mond im Gesicht (2019)\r\nSpatial audio short film, 27:44",
        "content": "<p>'Mit dem Mond im Gesicht' (With The Moon In The Face) tells the story of a soul travel through cosmic spheres. The longing for a moment of unity is continuously being confronted with lostness and chaos before the final return to home can take place.</p>\r\n<p>The soundtrack is a 5th order ambisonics sound-collage of piano, vocals, choir, sound-design, modular synth and field recordings.</p>\r\n<p>The spatial audio short film premiered at ZKM in Karlsruhe in July 2019. In 2021 the piece won the European QuattroPole music price.</p>\r\n<p>Music: Anina Rubin<br />Animation: Dohi Kim &amp; Anina Rubin</p>",
        "topics": [
            {
                "id": 623,
                "name": "Ambisonics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 910,
                "name": "field recordings",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 54,
                "name": "Piano",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 274,
                "name": "Soundart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 909,
                "name": "soundcollage",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 620,
                "name": "Spatialaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 618,
                "name": "Spatialsound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 22,
                "name": "Voice",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 29771,
            "forum_user": {
                "id": 29743,
                "user": 29771,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Anina-Rubin_portrait_by-Marie-Capesius.png",
                "avatar_url": "/media/cache/64/7a/647a6efdcce56d23b73a5a15c256f5b1.jpg",
                "biography": "ANINALAND is a multimedia artist and musician. She grew up in Luxembourg and Germany. She studied photography at the Photoacademy in Berlin (2009 - 2012), and media and sound arts at the Karlsruhe University of Arts and Design (2016 - 2022) and hold her Master's degree for her spatial audio drama 'Landalove, Landalive'.\n\nHer focus lies in storytelling through vocals, spatial sound design and spheric moving images. She takes her inspirations from nature and spiritual practices. Her spatial audio ambisonic sound collages and compositions are a melting mix of film score, sound poetry, musique concrète, and  singer-songwriting. The musical projects are created as fixed media or in symbiosis with experimental filmmaking and live performances.\n\nHer projects have been shown internationally, such as at the Bundeskunsthalle Bonn, Tate Modern, ZKM in Karlsruhe and Yuz Museum in Shanghai.",
                "date_modified": "2023-09-13T19:36:39.760852+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 408,
                        "forum_user": 29743,
                        "date_start": "2022-09-23",
                        "date_end": "2023-09-23",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "anina",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "mit-dem-mond-im-gesicht",
        "pk": 1339,
        "published": true,
        "publish_date": "2022-09-13T15:29:00+02:00"
    },
    {
        "title": "Spin Systems in Spatial Audio - Kate Milligan, Matthew Woodham",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris",
        "content": "<p>This project aims to integrate spatial audio and live performance, conceiving of ambisonic sound as an electronic extension of women&rsquo;s voices in a chamber setting. An interdisciplinary team of designers and musicians&mdash;led by Kate Milligan and Matthew Woodham&mdash;explore how naturally occurring spin systems, such as vortices and eddies, might be considered a compositional tool. Generative spin simulations are translated in real-time, and provide the framework within which the human voice is transformed.</p>\r\n<p>This project considers how traditional compositional parameters (including texture, polyphony, and rhythm, amongst others) manifest in the generative, self-organising environment. The performer&rsquo;s voices are transported away from the body and mingle fluidly in space. Experimental, iterative methodology is employed by the team to recast the role of &lsquo;composer&rsquo; in a more-than-human environment.</p>\r\n<p>Expanding on the theory of hydrofeminist scholar Astrida Neimanis, this project employs the &ldquo;logics&rdquo; of spin systems with implications for identity in performance. &ldquo;We experience ourselves less as isolated entities and more as oceanic eddies: <em>I am a singular, dynamic whorl dissolving in a complex, fluid circulation</em>. The space between ourselves and our others is at once as distant as the primeval sea, yet also closer than our own skin&rdquo; (Neimanis, Body of Water).</p>\r\n<p>(JunoCam image taken during the 1st, 3rd, &amp; 4th orbits of NASA's Juno spacecraft. Redistributed and unchanged under the Creative Commons Attribution 4.0 International License).&nbsp;</p>",
        "topics": [
            {
                "id": 636,
                "name": "Generative music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1224,
                "name": "Hydrodynamics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 473,
                "name": "Voice transformation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32942,
            "forum_user": {
                "id": 32894,
                "user": 32942,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7487736f256ff5030c0b2b59f038b0c9?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-04-06T17:51:24.023396+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "katemilligan",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "spin-systems-in-spatial-audio",
        "pk": 2149,
        "published": true,
        "publish_date": "2023-03-20T11:22:45+01:00"
    },
    {
        "title": "Audiovisual scenographies of transactions by Jānis Garančs",
        "description": "This presentation introduces an ongoing investigation into audiovisual XR architectures that translate the invisible and silent dynamics of narrative‑economic signals and contemporary algorithmic systems into tangible perceptual, spatial, and affective experiences.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/56006d94f5819ae21c0271d30b594cc9.jpg\" /><br /><br />Building on scenographic strategies developed in earlier audiovisual works&mdash;such as <strong>Ephemeral Value Sensoriums</strong> and <strong>Rhapsodic Statistics</strong> (which employ live market data), as well as the more abstract <strong>confluxus [+][&times;] corner portals </strong>(which transform a gallery&rsquo;s 90‑degree corner into a simulated zone of transition and confluence)&mdash;the research continues to explore how financial transactions, market microstructures, and AI‑driven processes can be staged as living, rhapsodic structures.</p>\r\n<p>The modular system architecture used in these projects processes real-time multivariate time‑series data from cryptocurrency exchanges, mapping market depth, liquidity imbalances, and volatility signatures into both visual and sonic forms. These include dynamic graphs, crowd simulations, and high‑fidelity natural‑element simulations&mdash;fire, smoke, clouds, and terrain&mdash;alongside spatial soundscapes in which data modulates harmony, timbre, and texture. AI‑generated voices occasionally vocalise number sequences, while AI‑derived samples and instrument‑separated stems convey broader shifts in market &ldquo;mood\".&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1be9c0a6b3a0938205fefbfc68b86f39.jpg\" /></p>\r\n<p>Sonically, the installation treats audio not as accompaniment but as a <strong>structural force</strong>. 
In both installation and performance formats, the setup creates conditions where sound and image continuously negotiate their roles: <strong>mimicking</strong>, <strong>contradicting</strong>, <strong>replacing</strong>, or <strong>translating</strong> one another. This interplay produces a perceptual field in which statistical fluctuations become audible forces, algorithmic mutations acquire timbral identities, and transactional flows manifest as shifting sonic topologies.<br /><br />References:<br />Jānis Garančs. 2025. <a href=\"https://dl.acm.org/doi/10.1145/3769534.3769599\">Bridging Immersive Analytics and Affect: Audiovisual XR Sceneries of Financial Transactions</a>. <br />In Proceedings of the 18th International Symposium on Visual Information Communication and Interaction (VINCI '25). <br />Association for Computing Machinery, New York, NY, USA, Article 54, 1&ndash;5. https://doi.org/10.1145/3769534.3769599<br /><br /><a href=\"https://rixc.org/en/home___/0/confluxus-corner-portals-by-janis-garancs-at-the-rixc-gallery/\">Confluxus [+][&times;] corner portals by Jānis Garančs at the RIXC Gallery</a></p>",
        "topics": [],
        "user": {
            "pk": 126732,
            "forum_user": {
                "id": 126565,
                "user": 126732,
                "first_name": "Jānis",
                "last_name": "Garančs",
                "avatar": "https://forum.ircam.fr/media/avatars/Garancs.jpg",
                "avatar_url": "/media/cache/c2/5a/c25aa43d137609ed0737523f55889dd4.jpg",
                "biography": "With a foundation in classical fine arts and music in Riga, Latvia, Jānis Garančs went on to specialise in video and computer art at the Royal Institute of Art (KKH) in Stockholm, Sweden, and digital audiovisual media at the Academy of Media Arts (KHM) in Cologne, Germany.\n\nSince 2000, his creative practice has focused on interactive multimedia installations, virtual and extended reality (VR/XR), and audiovisual performances. His work has been showcased at international festivals and conferences, including Ars Electronica, ISEA, Transmediale, and RIXC Art and Science Festival. He has received several artist-in-residence grants, such as those from SAT (Montreal), V2_Lab (Rotterdam), and EFFEA (European Festivals Fund for Emerging Artists).\n\nGarančs is a co-founder and board member of RIXC — the Riga Center for New Media Culture. Currently he is a PhD candidate at RTU Liepāja, part of Riga Technical University in Latvia, and a visiting researcher at Aalto Studios / Aalto University in Finland.",
                "date_modified": "2026-03-18T03:46:02.980287+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1215,
                        "forum_user": 126565,
                        "date_start": "2025-10-07",
                        "date_end": "2026-10-07",
                        "type": 0,
                        "keys": [
                            {
                                "id": 1085,
                                "membership": 1215
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "garancs",
            "first_name": "Jānis",
            "last_name": "Garančs",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3593,
                    "user": 126732,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "audiovisual-scenographies-of-transactions-by-janis-garancs",
        "pk": 4334,
        "published": true,
        "publish_date": "2026-02-09T15:56:01+01:00"
    },
    {
        "title": "Tutoriel Modalys n°1 : The Plucked String Radiation",
        "description": "Première partie d'une série de tutoriels sur l'utilisation de Modalys et de ses bibliothèques dans Modalisp, OpenMusic et Max.",
        "content": "<h5>Ce premier tutoriel se concentre sur l'exemple simple d'une corde pinc&eacute;e.</h5>\r\n<p></p>\r\n<p><span style=\"font-size: 1.125rem;\">Je commence par l'explication de ce process, puis je le tape dans Modalisp avec des explications d&eacute;taill&eacute;es.</span></p>\r\n<p><span style=\"font-size: 1.125rem;\">Apr&egrave;s &ccedil;a, je reconstruis le script Modalisp dans OpenMusic et enfin Max.&nbsp;</span></p>\r\n<p>Vous trouverez&nbsp;une liste de signet dans la description vid&eacute;o de&nbsp;<a href=\"//https://www.youtube.com/watch?v=__Xda1W5ZwY&amp;feature=youtu.be\">YouTube</a>.</p>\r\n<h6></h6>\r\n<p style=\"text-align: center;\"><video width=\"300\" height=\"150\" controls=\"controls\">\r\n<source src=\"/media/uploads/uploads/media/modalys_01_-_the_plucked_string_radiation_-_modalisp_openmusic_max.mp4\" type=\"video/mp4\" /></video></p>\r\n<h6 style=\"text-align: center;\"></h6>\r\n<p style=\"text-align: justify;\">M&ecirc;me si&nbsp;l'unidimensionnalit&eacute; de la connexion &agrave; pinces, une simple monocorde et une monocorde &agrave; deux masses auraient fait l'affaire, j'ai d&eacute;cid&eacute; de passer imm&eacute;diatement aux objets \"bi\". Cependant, si vous aviez un patch &agrave; pincement uniquement sur Max, vous pourriez &eacute;conomiser une bonne partie du processeur en utilisant uniquement des <strong>objets mono-directionnels.</strong></p>\r\n<h6 style=\"text-align: justify;\"><strong></strong></h6>\r\n<p style=\"text-align: justify;\"><b>Ce tutoriel a &eacute;t&eacute; r&eacute;alis&eacute; pa</b><strong>r&nbsp;<span class=\"\">Olav Lervik.&nbsp;</span></strong></p>",
        "topics": [
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 277,
                "name": "Max 8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 79,
                "name": "Max8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 348,
                "name": "Max externals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-01-the-plucked-string-radiation-modalispopenmusicmax",
        "pk": 722,
        "published": true,
        "publish_date": "2020-07-21T11:00:48+02:00"
    },
    {
        "title": "Micro Timing in Polytemporal Scores and Multilayered Textures by Simon Kanzler",
        "description": "The aim of this presentation is to summarize the work I did during my artistic research residency at IRCAM. I explored the potential of polytemporal scores through different methods of computer-aided composition. I also explored how principles of desynchronization can be used for realtime control of immersive textures. I focused on the idea of micro timing, working with the superposition of closely related tempi that results in varying phase differences between beats. The resulting tension can be used as a compositional strategy. I specifically looked at coupled-oscillator networks and other coupled dynamic systems in order to see how these mathematical models of synchronization can be applied musically, for example as control structures in granular sound synthesis.",
        "content": "<p><strong><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></p>\r\n<p><strong>Two Research Questions regarding Polytemporal Music:</strong></p>\r\n<p><em>1. Distant Tempi - Moment Form</em></p>\r\n<p>Can multiple tempo layers be perceived simultaneously and under what conditions? Can the spatial separation of instruments facilitate the perception. How can polytempo structures be used to create musical form? Here, I was interested in nonlinear form&mdash;mosaic structures&mdash;that are created by both sequencing and layering of &ldquo;musical moments&rdquo; as opposed to linearily evolving musical forms. For this, I wanted to use distant tempo relationships&mdash;for example a very slow tempo against a much faster tempo&mdash;and examine if these tempi can be perceived more easily under this condition.</p>\r\n<p><em>2. Close Tempi - Process Form - Coupled-Oscillator Networks</em></p>\r\n<p>What if, I use close tempo relationships instead, where the differences between tempi are a matter of phase differences. How can this tension be used musically and dramaturgically in a form? Here, I was curious to use coupled-oscillator networks, which are dynamic models of synchronization such as the <em>Kuramoto model </em>that describe the spontaneous synchronization of biological oscillators. These seemed interesting because they describe a process in which the phase and period differences between oscillators are adjusted and cause a gradual transformation from a desynchronized to a synchronized state. To me, this is a rhythmical analogy of a dissonant harmony that is resolved to a consonant one, thus creating tension and release. 
Translated to tempo relationships, this means adjusting the tempo of each voice measure by measure, causing it to constantly change and creating a linear formal process of gradual synchronization.</p>\r\n<p><strong>Methods:</strong></p>\r\n<p>I explored both of these ideas by implementing them in my LISP environment in Max. First, I programmed functions that I later used to create the tempo structure of a score. This score is graphically displayed using the Bach library, which can already display polytempo scores correctly.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e35bbfe6c269825ba04bab17032c17fa.png\" /></p>\r\n<p>I used two strategies corresponding to research questions 1 and 2.</p>\r\n<p>1. I created tempo relationships using harmonic ratios with the option to change them moment to moment&mdash;section by section&mdash;and using pivot tempi to connect these sections, thus creating a sort of nonlinear &ldquo;tempo mosaic&rdquo;. I tested these with a percussion sampler and surround spatialization in Studio D to see how space could affect perception. It was immediately clear how important it is to create a distinctive timbral space for each voice and tempo layer in order to separate them perceptually. However, I also discovered interesting spatial effects resulting from using the exact same timbre for each voice.</p>\r\n<p>2. I implemented the <em>Kuramoto model</em>, other synchronization models such as the <em>algorithmic time-keeper model</em>, and my own algorithm inspired by the Kuramoto model. These models return phase or frequency values, and I translated them into BPM in order to create scores with the Bach library in Max. I implemented these BPM values in the score measure by measure, in discrete steps. I discovered an unexpected pattern using the Kuramoto model. Even though the voices started to synchronize to each other as expected, they never ended up completely aligned with a common barline. 
Instead, each voice oscillated between two or three BPM values with extremely small deviations. This insight into the nature of the Kuramoto model was interesting, but it also meant that the model was not useful to me for automatic score generation, because I rely on accurate synchronization points that create common barlines. Instead, I focused on models such as the <em>algorithmic time-keeper model</em> and the <em>circle map phase oscillator model</em>. These models can be used to achieve a very similar musical result even though their algorithms and purposes are different. The Kuramoto model is an abstract one that aims to describe phenomena in nature such as synchronous chorusing in animal populations. It works with coupled-oscillator networks to describe self-organized behaviour in large populations. Self-organization means that all entities listen and synchronize to all other entities simultaneously without needing a leader. There are, on the other hand, synchronization models aimed at describing rhythm perception and coordination. They examine how humans are able to synchronize to an external impulse, such as a clicktrack or a musician. These models work with a stimulus and a response pulse or oscillation. When working with a group of people responding to the stimulus, this impulse acts as a group leader or &ldquo;master clock&rdquo;. In order to achieve a musical result similar to that of the Kuramoto model, I created many response pulses&mdash;voices in a score with independent tempo and BPM markings&mdash;instead of just one, all of them with their own tempo and all of them &ldquo;listening&rdquo; to the master clock. This approach has several advantages when applied to a composition. Firstly, it is possible to control the target tempo by setting the master clock tempo. All the voices will eventually synchronize to that tempo. 
Secondly, it makes the synchronization of acoustic instruments with electronic voices possible when they both share the same master clock. After creating the temporal structure, I created LISP functions with the role of &ldquo;populating&rdquo; the tempo/measure structure with musical material and transforming this material.&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4f370b80c1dafa3a9184f3bbcf245abc.png\" /></p>\r\n<p><strong><em>Pulse2Texture</em> System for Realtime Control of Multilayered Textures:</strong></p>\r\n<p>In parallel to this score-based non-realtime approach, I started to explore possible applications of the synchronization models for realtime control of electronic sounds. Previous research in this area has been done by Nolan Lem, who explored coupled-oscillator networks to generate sound in various ways, through sound synthesis or rhythmic generation. However, his artistic work has explored these systems mainly in audio-visual installations. In my own research, I want to focus on applications in electroacoustic mixed-media settings and explore interactivity between musicians and electronics. For this reason, I have focused for now on models that describe rhythm perception and coordination such as the <em>circle map phase oscillator model</em>, and not models that describe self-organized behaviour such as the Kuramoto model. As mentioned above, the advantage of those models is that they work with a stimulus or &ldquo;master clock&rdquo; and this can ensure the synchronization between musicians and electronics. Musically, I am working with sound masses, many independent agents with their own tempo but all of them listening to the master clock. I control the degree of synchronization or desynchronization between the agents and the master clock by adjusting the coupling strength. 
As a musical result, I can morph between a completely desynchronized state that resembles a swarm-like granular texture and a synchronized state in which a pulse and rhythmic patterns with a perceptible tempo emerge. These beat-based patterns become especially interesting when thinking about synchronization with musicians. They can help to create a feeling of &ldquo;groove&rdquo; that feels alive. Not all of the agents will start to synchronize at the same time, since synchronization depends on both the coupling strength and their starting tempo. While most of them will synchronize when the coupling strength is strong enough, there will often be a few that don&rsquo;t. This behaviour results in a sound that is less machine-like and more natural and human. I also achieved very interesting results by experimenting with the coupling strength to obtain a looser or tighter groove. As a start, I have implemented the <em>circle map phase oscillator model</em>. First, I used the new JavaScript tools in Max 9. This enabled me to experiment with the model directly within Max. Second, I started to explore the Antescofo language. I worked together with <strong>Jean-Louis Giavitto</strong> on an implementation of the model. In this process, the concept of actors in Antescofo was particularly helpful. Actors focus on the management of concurrent activities of autonomous entities, and thus lend themselves perfectly to controlling systems with a large number of parts. For now, I have used the model as a control structure for triggering samples within Max.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4c5fc8101ea5c8b8fea4ef6974f4b77a.png\" /></p>\r\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 11196,
            "forum_user": {
                "id": 11193,
                "user": 11196,
                "first_name": "Simon",
                "last_name": "Kanzler",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/635023a21be0d28a17b64e8210992a50?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-20T02:55:40.987817+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 312,
                        "forum_user": 11193,
                        "date_start": "2026-01-19",
                        "date_end": "2027-01-19",
                        "type": 0,
                        "keys": [
                            {
                                "id": 1033,
                                "membership": 312
                            },
                            {
                                "id": 1034,
                                "membership": 312
                            },
                            {
                                "id": 1035,
                                "membership": 312
                            },
                            {
                                "id": 1036,
                                "membership": 312
                            },
                            {
                                "id": 1072,
                                "membership": 312
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "simonkanzler",
            "first_name": "Simon",
            "last_name": "Kanzler",
            "bookmarks": []
        },
        "slug": "micro-timing-in-polytemporal-scores-and-multi-layered-textures",
        "pk": 4330,
        "published": true,
        "publish_date": "2026-02-09T01:36:45+01:00"
    },
    {
        "title": "Tutorial: Neural Synthesis in Max 8 with RAVE",
        "description": "Learn to perform neural audio synthesis inside Max 8 using nn~.",
        "content": "<p>Are you looking to investigate a little deeper neural audio synthesis within patching environments? This tutorial is made for you!</p>\r\n<h1>Video Tutorial</h1>\r\n<p><iframe width=\"425\" height=\"350\" src=\"//www.youtube.com/embed/Dy1WTc022rQ\"></iframe></p>\r\n<p></p>\r\n<h1>Installation</h1>\r\n<p>We will use <a href=\"https://forum.ircam.fr/projects/detail/nn/\">nn~</a> to interface neural audio synthesis models with both Max and Pure Data. Then be sure to collect the files corresponding your platform on the <a href=\"https://github.com/acids-ircam/nn_tilde/releases\">last release</a>!</p>\r\n<h3>Install nn~</h3>\r\n<h4>For Max 8</h4>\r\n<p>Just unarchive the <code>nn_max_msp_OS_ARCH.tar.gz</code> archive, and place the folder in the <code>Packages</code> folder of your <code>Max 8</code> folder. You can place the folder to another place, but do not forget to add the location in Max's File Preferences!</p>\r\n<h4>For PureData</h4>\r\n<p>Just unarchive the <code>nn_max_msp_OS_ARCH.tar.gz</code> archive, and place the folder in the <code>externals</code> folder of your <code>Pd</code> folder. Do not forget to remove the quarantine of MacOS by lauching within terminal :</p>\r\n<pre><code>cd /path/to/nn/folder % replace by the location of your external!\r\nxattr -r -d com.apple.quarantine .\r\n</code></pre>\r\n<h3>Download a model</h3>\r\n<p>Do not forget that nn~ is only a bridge between patching environements and neural synthesis models, such that you will need to download a nn~-compatible model. nn~ is compatible so far with <a href=\"https://forum.ircam.fr/projects/detail/RAVE/\">RAVE</a> and <a href=\"https://forum.ircam.fr/projects/detail/vschaos2/\">vschaos2</a>, so go on the corresponding pages to fetch the models you wan to try out.</p>\r\n<p><strong>Important</strong> : in Max, your models have to be accessible within Max file preferences, so be sure to put them in an appropriate location. 
In PureData, the models must be in the same folder as the external.</p>\r\n<h1>Generating audio with RAVE</h1>\r\n<p><code>nn~</code> installed? Models downloaded? We are now ready to play a little bit with <code>nn~</code>.</p>\r\n<h2>Audio transformation with forward</h2>\r\n<p>The most straightforward way of generating sound with RAVE through nn~ is the <code>forward</code> function. The arguments for <code>nn~</code> are:</p>\r\n<pre><code>nn~ MODEL_NAME [METHOD_NAME] [BUFFER_SIZE]\r\n</code></pre>\r\n<p>where <code>MODEL_NAME</code> is the name of the model (for example, <code>vintage.ts</code>), <code>METHOD_NAME</code> the name of the method (<code>forward</code> by default), and <code>BUFFER_SIZE</code> the inner buffer size used by <code>nn~</code> to transform the sound (the smallest available by default). To use your model as an audio effect, just plug in an audio input and an audio output:</p>\r\n<p>And that's all! You should be able to perform neural transformation of your incoming sound 🎶</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/eb6dea3dcdeb99e079a8fcebacc82181.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p><strong>Tip</strong>: you can disable the internal DSP of an <code>nn~</code> box by sending it the <code>enable 0</code> / <code>enable 1</code> message. Very convenient for saving some CPU!</p>\r\n<h2>Latent manipulations with encode &amp; decode</h2>\r\n<p>That was too easy, so let's make things a little more difficult. Both RAVE and vschaos2 are auto-encoders, meaning that they take sound as an input, generate sound as an output, and are trained to reconstruct the incoming sounds of the dataset. 
This processing is based on two separate processes:</p>\r\n<ul>\r\n<li>an <strong>encoding</strong> process, where a given window of incoming audio (say, 2048 samples) is transformed into a set of <em>latent</em> variables (128 parameters in general)</li>\r\n<li>and a <strong>decoding</strong> process, which inverts these 128 latent variables back into sound.</li>\r\n</ul>\r\n<p>The <code>forward</code> function is actually just the chaining of these two functions. With nn~, you can access these two functions separately with the <code>encode</code> and <code>decode</code> functions.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9e28efc247a3623939b0e69db7ac2f18.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Each output of <code>encode</code> corresponds to a latent dimension of the input audio, so you can access each latent parameter separately. In <code>vschaos2</code> all the latents are exposed to the user, while in <code>RAVE</code> only a subspace of these latent dimensions can be controlled, depending on the true latent space morphology (see the video for more information).</p>\r\n<p>Accordingly, the <code>decode</code> function has a number of inputs matching the number of latent dimensions (plus conditioning entries for <code>vschaos2</code>). Hence, connecting every thread from encode to decode amounts to the <code>forward</code> function; however, we can now access individual latents to perform transformations over the latent space. Here's a great aspect of auto-encoding architectures: they offer a full spectrum between an <em>audio effect</em>, where all the latents coming from an audio input are given to the decoder, and a <em>synthesizer</em>, where all the latents are directly controlled by the user. 
For example, we can mix encoded, controlled, and automated latents with the following patch:</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/744837891b7b814f2cdad8b9decbdc09.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>You can find the code below, in the Example 1 section. Here, we pass through the first four latent dimensions from the audio encoder, but map the 5th and 7th dimensions to a controllable slider, and the 6th and 8th dimensions to a parametrized LFO. Latent dimensions are sampled from an isotropic normal distribution during training, so most information usually lies between -3 and 3; however, this may depend on your model, so do not hesitate to adapt these ranges through exploration.</p>\r\n<p>RAVE usually sorts dimensions by their impact on the output sound, so this setup remains reactive to the input audio while still being sensitive to user input. Such hybrid conditioning of the decoder allows endless sound shaping through neural synthesis; do not hesitate to try out any idea you may have!</p>\r\n<h2>Multi-channel functionalities (Max 8 only)</h2>\r\n<p>Now, let's dig a little deeper into the multi-channel functionalities of <code>nn~</code> with the <code>mc.nn~</code> and <code>mcs.nn~</code> objects. 
These objects are very convenient for both patching and saving some CPU load, so do not hesitate to integrate them in your workflow!</p>\r\n<h3>Batched transformation of sounds</h3>\r\n<p>Imagine you want to decode several sounds at the same time: a naive approach would be to duplicate the <code>nn~</code> box for each sound, as in the following image:</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4ea2821cd4ca33b020a710169f86cf9f.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>However, besides the tedium of repetitive patching, this strategy is also dramatically inefficient in terms of CPU cost: indeed, the model is copied four times in RAM, and the processing load is multiplied by four. If your computer is not a racehorse, this can cause CPU overload and glitchy audio clicks.</p>\r\n<p>Fortunately, <code>mc.nn~</code> is there to save us! <code>mc.nn~</code> uses the multi-channel feature of Max 8 to perform <em>batch processing</em> of sounds, meaning that it can process several inputs at the same time using a single model. Furthermore, depending on your architecture, the model may parallelize these processes efficiently: minimal CPU cost, and a single model in RAM. No more problems! 
To do this, just gather your sounds with an <code>mc.pack~</code> module, and send the result to an <code>mc.nn~</code> instance.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2eb28feb73b8bfaa4be865a8bd4933f5.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h3>Latent manipulation with mc.nn~ and mcs.nn~</h3>\r\n<p>In addition to saving time and CPU load, multi-channel can also be used to efficiently perform latent operations with <code>mcs.nn~</code>; but first, let's see the difference between <code>mc.nn~</code> and <code>mcs.nn~</code>.</p>\r\n<ul>\r\n<li><code>mc.nn~</code> has the same number of inputs/outputs as its <code>nn~</code> counterpart, and automatically adapts to the lowest channel count among its inputs. For example: if every incoming input has 4 channels, the outputs will have 4 channels; but if a single one has 3 channels, the outputs will have 3 channels.</li>\r\n<li><code>mcs.nn~</code> takes all the inputs/outputs of a single instance in one inlet, such that the number of batches must be declared at initialization. Imagine a <code>decode</code> function with 8 inputs; an <code>mcs.nn~ isis forward 1</code> will then have one inlet, requiring 8 channels. To process 4 batches at the same time, you will need an <code>mcs.nn~ isis forward 4</code> with 4 inlets, each one requiring 8 channels.</li>\r\n</ul>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/af0c3401caf71b1b2e753a69c00fb7fd.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>In the example below, both <code>mc.nn~</code> and <code>mcs.nn~</code> are used to decode 4 sounds at the same time. These two objects are absolutely equivalent in terms of performance; they just allow different uses. 
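</p>
<p>As a quick sanity check, the channel-count rules above can be sketched in a few lines of Python (an illustrative sketch, not part of the patch; the function names are ours):</p>

```python
def mc_nn_output_channels(*input_channels):
    # mc.nn~ adapts to the lowest channel count among its inputs:
    # feeding it 4-, 4- and 3-channel signals yields 3-channel outputs.
    return min(input_channels)

def mcs_nn_layout(model_io, batches):
    # mcs.nn~ packs every input/output of one instance into a single
    # inlet/outlet, so the batch count is declared at initialization:
    # one inlet per batch, each carrying 'model_io' channels.
    return {'inlets': batches, 'channels_per_inlet': model_io}
```

<p>With a <code>decode</code> function taking 8 inputs and 4 batches, this gives 4 inlets of 8 channels each, matching the description above.</p>
<p>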
<code>mcs.nn~</code>, by example, is very convenient to perform batch operations on the latent dimensions of different sounds, as in the little patch below.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/34d03b6eadd3830e726e824e2ae50d0c.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>In this example, we added in a very simple way some latent noise to two different batches. <code>mcs.nn~</code> then allows to manipulate all the latents at the same time, but to perform different operations for each batch, as the threads are separated (an operation that would have been very tedious in <code>mc.nn~</code>). You can find the code below, in example 2 section.</p>\r\n<p>Well, that's it! Do not hesitate to ask your questions in the <a href=\"https://discussion.forum.ircam.fr/c/rave-vst/651\">RAVE VST Forum</a>.</p>\r\n<h1>Compressed patches</h1>\r\n<h3>Example 1</h3>\r\n<pre><code>&lt;pre&gt;&lt;code&gt;\r\n----------begin_max5_patcher----------\r\n1122.3oc0XErjahCD8r8WAEG2xwFIP.dNmTU1C6w8zVolRFzXqLBIBRj3IoR\r\n91WIAXSBXO3wxYpbvXnkDO8590Rs3aym4uQrmH88ty6+7lM6aymMyZxXXV6y\r\ny7Kv6yXXosa9YhhBBW4unoMEYuxZOQsyKmVzYmlasJ17w2.AcF40ETNinruI\r\nvQihZUm0fVqkXU1NJe68UjLUy7KAEuDsva85kAK7P.yUXvx.uOXFw2mO2bYw\r\nUxBzoYQfaXQbXxMmEX8b4n4xJhT2KrhJ32ynbRlsUcGgCoIXsanIJFYoVB5F\r\nxy2+0QBTfTGwffnkQvTTBn8ZjNrACswsvvqhObxWzyzAzQR29iwHTzEPHvYx\r\neRf14NHn4OKEfFJzNllWi5oRRy.LSHNl46TNlgYDufkd.Xh2aBW5EtbLJiFk\r\nxwuPJCSrQMfNh9rb9kwVIilSpFiIwGRCwU3BhhTcOgi2vH8keNIvBfnNUo9g\r\nn.2yxKT2F5FcaLJ7OFcKzM51VJ+ZpaA2dcaKKe0zs+0np1w2mGdojKBAGt8Q\r\nqFNI41pgefIzyyQH2ZGETgmYaSMEGtsYhUAiBFv6GDUEXU+jjgw7Ed9av7sN\r\n1Wjd68EipAfgouV9hSjGj8TFiLVtP7MLU3zRBmlJP9ZNNaLtgt.tclpDgqiM\r\nDHBX2BNB0c0kwGN+GdTIU5kqKcOmLBYFuBwzKMP0Rlvlh1aEp+12p8.cI7SP\r\n2KovhzmktfT3kR2EN4tWjSqjgehQkpCZZQEcKU+5TjhRg0OX2Ic3AXmx5cJZ\r\nAQppHZeUCiO5KJoZiYhJi+SeRxmcUxGDLl3KaYhM8laci3S0XFU8jcxsAKoY\r\n9iF7P8WYjqFG8yDYCsBYPyhLg8qrvBpplSMqkdmWTT2aqvH4FLstPgf4WN0N\r\nSwUZV9geIVwH7spNOrYtoc7Y
OJO1wbrB2JBZUAy7yXzxCepldMnaBuQJX0Jh\r\n1UXeq9+CNixUB4Nu2+16V8uRRkbUtn3QZ0ijUuU+xkZO7pbLs39Bk8evp2sW\r\nUgyzl+6hMjLJirJaWkn.WtSvIKwzGZcF1.htYtVE0r+wY61iTdiHDWmSEFK8\r\n5Pq9rdc35f.X5g5mMsIIriAaieJDfRRWGkDFmFAAfEZSs2Gffw5VS67elgyD\r\nhxd5QqGTvUDt5doBqHct29NxdipyjM8r4ll70VKsosVD8MeUke4qnYUkF6+b\r\ntrTTWk0Ih5Nceurrbc5mVgbj1cqu6EehkKlLPwS.HyAu8BtVjPSAoHWfT3Dc\r\ndQWKPvoPoPWPIvTPB5BjBlnyKo25WlSKZV682CxngHCtJjWOEmqY1AtRma7T\r\nA5ZihoS.nXGfiQaGNgPVnCvANAbfN.GvDvA3.blhT+ZiOSYaDzOwYGkLOUfC\r\nbct7TVRd3JxMaXiKK+rtln1NagPWl8GEUGNxfutj1lGsGF2uh7Yprekm93Jc\r\nYmJcIJ0UM0AsOt4bX1ZHq30zVNpI2bS4a5R3M0LIKwYskXpqze92m++.u94U\r\nJ\r\n-----------end_max5_patcher-----------\r\n&lt;/code&gt;&lt;/pre&gt;\r\n</code></pre>\r\n<h3>Example 2</h3>\r\n<pre><code>&lt;pre&gt;&lt;code&gt;\r\n----------begin_max5_patcher----------\r\n1062.3oc6X0zaaiCD8r8uBAcrqqsnjohbO0Cs.6hE6wcuTTDPSyXyFJRURpV\r\nGTT+ae4GRxxIRtx01In.8PBMGRwYduY3Liz2FOJboXKQEF7lfODLZz2FOZjS\r\njUvnp4iByQawLjxssPEVTP1ENwuDckSnX4mdc7hZg7xbQolQztmHpRZARi2P\r\n4quURvZuFSffoQSpG.IQMCAeb+YQ40GUrU12GO19uICzdwh7bBWWaaZxVmxC\r\nYHsQb.WPUj.jwf2um1fJ6jAELKyAi49AfaHtWLA9IvDm7Uiw8DHkim9G6Bhl\r\n1ENR6FGfJodQ5GJHdPDlWxzT7FDmSXJ5ZNhE1X+c4IAPGLSuwN.8Shuj9w9w\r\nrhTfjFu4tfrtPNrajGeBHexIyGvHWHcrmVVDeb93mIFnfgdfQU5lqhBIcM0X\r\nWLBesdi2LbWnBMF98p8laaxYdXCFjnbhlHukvQKYj1A4GRbvGoOMIuP3PgMH\r\nuOVcOS10ur+shh0TAGIeXuohYzhMD55MNZMA1b94hUtSNbIRQwMffZbDXgz5\r\nGLGUaLbmfwDecMSrrkEG8CCnAySbC23hqSRa4AuSHyQbc2pyYV5Rt4PshmOu\r\ndgOWhXT8CO01OLfnNLmlSTZIwXdUNzF0uBoQUgGUwGd5pIYdqELKgVpDrRs4\r\nphO1H7ePXJWKTaB9y28lY+qhHUyVIxumJumL6clCWYrkYqPz7ay0tQvr2uUK\r\nQXi3+JeIASYjY+sIzZ5+YdVRxTD8tJv3nGyxbSHkSYGea2S49PRT4JpvJo0F\r\nphVKiSVDEcyBHn0ZLgnnEc4n.A2lW+Vk1jQnleZyDsdpZQtad9eD7ww6kbh2\r\nH6uxX7IWDINJ8pWYr2Lppob9t.phpBVQvlaZAwcgJvKQh0ZhIxyHwYW9JM2w\r\nDlynKDGc5YK6kMrXeIxld3XkQRR8Yg7kUidBXcIgbaN852bwq5o4BP10s4BX\r\njq9IHyQFvjmylK5qgJvMWWLWEm+hzPkqm3cAu0Zsp.PZWvO84wkC.C5ZN3Jk\r\n9iv6M8G.9Bl9CrH4pk96hzXI3WqFKSe9ZrrxCtv+9AdG3u6q7w8Uh2HE4nhM\r\nBN4X8Ud7sMn9JShyhhfKZpg870WoixCYT9i+.
LtnFq7CukpDkRbcXecGXAf8\r\ndNiK0buoNj4C6+rLV2amYBFrlfCQSoGrob5pBgITPU+YQRlBMQ7vXeDeTyry\r\n01R8.731VEUcVZxVyeXZ5rwDXHZJ9BnIvf8rWjXneDlbTb+wP.eTi+kxiMub\r\nT8rykExFBeCuD78PhVqMmyiumODMAuD2KFrm8rYuAioyl8hFpe5P1ymTGUT7\r\nESgupc6ThoIqOIj1oYSbSob+T2KtEJIegpZWrODIMsNnMs.TJ8E61l5apx0i\r\nhjWRqJ1Xf2XaMZSCb1BipBDtpEFSedi+93+Wpifq6\r\n-----------end_max5_patcher-----------\r\n&lt;/code&gt;&lt;/pre&gt;\r\n</code></pre>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -41px; top: -18.5px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 674,
                "name": "neural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 59,
                "name": "Synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20182,
            "forum_user": {
                "id": 20174,
                "user": 20182,
                "first_name": "Axel",
                "last_name": "Chemla-Romeu-Santos",
                "avatar": "https://forum.ircam.fr/media/avatars/Photo.jpg",
                "avatar_url": "/media/cache/f7/78/f778be374ea22ae4fcea1834f753924b.jpg",
                "biography": "Based in Paris, France, Axel Chemla-Romeu-Santos works as a researcher, composer, and performer in various fields such as music, theater, and artificial intelligence. After a double undergraduate degree in Engineering Sciences & Music Theory, he specialized in acoustics and computer music at IRCAM. Always looking for creativity through technology, he initiated a PhD between IRCAM (Paris) and LIM (Milano) on the creative uses of generative artificial intelligence for sound synthesis. After graduation, he continued a research & creation approach to artificial intelligence, working both on scientific papers on AI creativity and on experimental musical pieces exploring diverse aspects of these technologies (such as network bending, real-time improvisation, and composition). \nBesides institutional work, he also works as a musician and composer for the company Théâtre de la Suspension, is co-founder of the w.lfg.ng collective, member of the maximalist electronic music band Daim™, and has his personal project Kenoma.",
                "date_modified": "2025-10-21T19:56:31.408648+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 626,
                        "forum_user": 20174,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-18",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "chemla",
            "first_name": "Axel",
            "last_name": "Chemla-Romeu-Santos",
            "bookmarks": []
        },
        "slug": "tutorial-neural-synthesis-in-max-8-with-rave",
        "pk": 2871,
        "published": true,
        "publish_date": "2024-03-20T12:38:02+01:00"
    },
    {
        "title": "Moving Towards Synchrony: A Brainwave to Music Translation System",
        "description": "Moving Towards Synchrony: A Brainwave to Music Translation System is an immersive work whose purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that has been generated -and defined by- those same physiological events.",
        "content": "<p><strong>Introduction:</strong></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>My name is Johnny Tomasiello and I am a multidisciplinary artist and composer-researcher, living and working in New York.</span></p>\r\n<p><strong>Moving Towards Synchrony: a Brainwave to Music Translation System</strong><span> is an immersive work whose primary purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that has been generated -and defined by- those same physiological events.</span></p>\r\n<p><span>It investigates the neurological effects of modulating brainwaves and their corresponding physiological processes through neuro- and bidirectional feedback, through the use of a Brain-Computer Music Interface (or BCMI). The BCMI allows for the sonification of the data captured by an electroencephalogram, effectively using the subject&rsquo;s brainwaves to produce real-time interactive soundscapes that, being simultaneously experienced by the subject, have the ability to alter her or his physiological responses.</span></p>\r\n<p><span>This work can be presented as an interactive computer-assisted compositional performance system, and I have staged performances with it to that end.<br />But its original intent is to directly engage the audience, inviting others to use the system, which could teach them how to effect a positive change in their own physiology by learning to influence the functions of the autonomic nervous system.</span></p>\r\n<p><span>While developing the project, I was concerned with maintaining a balance between the mindfulness the experience was meant to inspire, and the meaningfulness of the result. The work demands active engagement from the listener, if they are participating directly, and is concerned with staying in the process. 
This represents, for me, &ldquo;...a move away from making objects to making processes&rdquo; [1], as well as a move away from the subjective, where the process, the experience, and the quantitative and qualitative analysis of those things are more significant than anything that&rsquo;s produced as a result.</span></p>\r\n<p><span>In addition to investigating these neuroscience concerns, this project is designed to explore the validity of using the scientific method as an artistic process. The methodology will be to create an evidence-based system for the purpose of developing research-based projects.</span></p>\r\n<p><span>As Gita Sarabhai expressed to John Cage, &ldquo;Music conditions one&rsquo;s mind, leading to &lsquo;moments in [one&rsquo;s] life that are complete and fulfilled&rsquo;.&rdquo; [2]. Music, in this case, can also be used by the mind to condition one&rsquo;s body.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 2\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><img src=\"/media/uploads/tomasiello-modularsquare_001.png\" alt=\"\" width=\"960\" height=\"960\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p><strong>Basis:</strong></p>\r\n<p><span>The research methodology explores how to collect and quantify physiological data through non-invasive neuroimaging. The melodic and rhythmic content is derived from, and constantly influenced by, the subject&rsquo;s brainwave readings. 
A subject, focusing on the musical stimuli, will attempt to elicit a change in their physiological systems through the experience of the bi-directional feedback system.</span></p>\r\n<p><span>The resulting physiological responses will be recorded and, along with the results of other subjects&rsquo; data sets, quantitative analysis will be performed to determine the efficacy of using external stimuli to affect the human body, both physiologically and psychologically.</span></p>\r\n<p><span>Brainwave data captured by an EEG has shown high levels of success in classifying mental states [3], which affect &ldquo;autonomic modulation of the cardiovascular system&rdquo; [4]. Furthermore, there are existing studies investigating how music can influence a response in the autonomic nervous system. [5]</span></p>\r\n<p><span>This work is particularly interested in the amount of activity in the alpha brainwave frequency range. Increased activity in the alpha wave frequency range is &ldquo;usually associated with alert relaxation&rdquo;. [6] Methods intended to increase activity in the alpha wave frequency range through feedback, autogenic meditation, breathing exercises, and other techniques, are classified as alpha training.</span></p>\r\n<p><span>Brainwaves are generally faster and have higher frequencies during wakefulness, and occur at a lower frequency during deep sleep. Although alpha waves can occur between alertness and the beginnings of sleep, there is a difference between the physiological benefits of sleep, and those reported when there is greater activity in alpha. 
Perhaps the most basic distinction between alpha training and sleep is the conscious awareness and regulated breathing patterns, with the ability to control and adapt the alpha training for maximum benefit.</span></p>\r\n<p><span>Positive change (compared with the control group) in the amount of activity in alpha is what I will investigate here, since research has shown that stimulating activity within alpha causes muscle relaxation, pain reduction, breathing rate regulation, and decreased heart rate [6] [7] [8]. This has also been used for reducing stress, anxiety and depression, and can encourage memory improvements, mental performance, and aid in the treatment of brain injuries. It is with these phenomena in mind that this work was first conceived and developed.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 3\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong>Information on EEG:</strong></p>\r\n<p><span>An electroencephalogram (also known as an EEG) is an electrophysiological monitoring method used to record the electrical activity of the brain. A typical adult human EEG signal is between 10 and 100 </span><span>&mu;</span><span>V (microvolts) in amplitude when measured from the scalp. It was invented by German psychiatrist Hans Berger in 1929, and research into how brainwaves can be interpreted and modulated started shortly thereafter. Using an EEG, you are able to directly measure neural activity and capture cognitive processes in real time. Berger proved that alpha waves (also initially known as Berger waves) were generated by cerebral cortical neurons.</span></p>\r\n<p><span>In 1934, English physiologists Edgar Adrian and Bryan Matthews first described the sonification of alpha waves derived from EEG data. [9] In doing so, they found that &ldquo;non-visual activities, which demand the entire attention of the subject (e.g. 
mental arithmetic) abolish the waves; other sensory stimulation which demand attention also do so&rdquo; [10], showing how concentration and thought processes affected activity in the alpha wave frequency range.</span></p>\r\n<p><span>The brainwave activity recorded in an EEG is a summation of the inhibitory and excitatory postsynaptic potentials that occur across a neuronal membrane. [11]</span></p>\r\n<p><span>The measurements are taken by way of electrodes placed on the scalp. The readings are divided into five frequency bands, delineating slow, moderate, and fast waves. The bands, from slowest to fastest, are:</span></p>\r\n<p><strong>Delta</strong><span>, with a range from approximately 0.5Hz&ndash;4Hz, which signifies deepest meditation or dreamless sleep.</span></p>\r\n<p><strong>Theta</strong><span>, from approximately 4Hz&ndash;8Hz, signifying meditation or deep sleep.</span></p>\r\n<p><strong>Alpha</strong><span>, from approximately 8Hz&ndash;13Hz, representing quietly flowing thoughts.</span></p>\r\n<p><strong>Beta</strong><span>, from approximately 13Hz&ndash;30Hz, which is a normal waking state.</span></p>\r\n<p><strong>Gamma</strong><span>, from approximately 30Hz&ndash;42Hz, which is most active during simultaneous processing of information that engages multiple different areas of the brain.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 4\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong>History of EEG use in music:</strong></p>\r\n<p><span>Physicist Edmond Dewan began the study of brainwaves in the early 1960s and developed a &lsquo;brainwave control system&rsquo;. The system detected changes in alpha rhythms which were used to turn lighting on or off. 
&ldquo;The light could also be replaced by &lsquo;an audible device that made a beep when switched on&rsquo;, allowing Dewan to spell out the phrase &lsquo; </span><span>I can talk</span><span>&rsquo; in Morse code&rdquo;. [9] Dewan subsequently met experimental composer Alvin Lucier, a meeting that inspired the first actual brainwave composition.</span></p>\r\n<p><span>Alvin Lucier first performed </span><span>Music for Solo Performer </span><span>in 1965. It involved the composer sitting in a chair on stage, with his eyes closed while his brainwaves were recorded. The data from the recording was amplified and distributed to speakers set up around the room. The speakers were placed against different types of percussion instruments, so the vibration of the speakers would cause the instrument to sound.</span></p>\r\n<p><span>Lucier was able to control the percussion events through control of his cognitive functions, and found that a break in concentration would disrupt that control. Although mastery over the alpha rhythm was (and is) difficult, </span><span>Music for Solo Performer </span><span>greatly contributed to the field of experimental music and illustrated the depth of possibility in using EEG control over musical performance.</span></p>\r\n<p><span>Computer scientist Jacques J. Vidal published the paper </span><span>Toward Direct Brain-Computer Communication </span><span>in 1973, which first proposed the Brain-Computer Interface (BCI), a means of using the brain to control external devices.</span></p>\r\n<p><span>This was the very beginning of Brain-Computer Music Interfacing (BCMI) research, which has evolved into an interdisciplinary field of study &ldquo;at the crossroads of music, science and biomedical engineering&rdquo; [12]. 
BCMIs (also referred to as Brain-Machine Interfaces, or BMIs) are still in use today, and the field of research around them is still in its early stages.</span></p>\r\n<p><span>Paul Lehrer, Ph.D., under whom I studied at UMDNJ, contributed significant research to the field of psychophysics from the 1990s to today, with studies on biofeedback and stress management techniques. Dr. Lehrer also set standards for music therapies and their uses as relaxation techniques and their beneficial physiological effects by testing benefits amongst subjects with asthma. One of his recent research papers from 2014, </span><span>Heart Rate Variability Biofeedback: How and Why Does it Work? </span><span>[14] investigated the effectiveness of heart rate variability biofeedback (HRVB) as a treatment for a variety of disorders, as well as its uses for performance enhancement.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 5\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong>Project Overview:</strong></p>\r\n<p><span>This project records EEG signals from the subject using four non-invasive dry extra-cranial electrodes from a commercially available MUSE EEG headband. Measurements are recorded from the TP9, AF7, AF8, and TP10 electrodes, as specified by the International Standard EEG placement system, and the data is converted to absolute band powers, based on the logarithm of the Power Spectral Density (PSD) of the EEG data for each channel. </span><span>Heart rate data is obtained through PPG measurements (although that data is not used in the current version of this project). 
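</span></p>
<p>The conversion from a raw EEG channel to absolute band powers can be sketched as follows (a minimal Python/NumPy periodogram sketch, not the Mind Monitor implementation; the band edges follow the ranges listed earlier in this article):</p>

```python
import numpy as np

# Band edges in Hz, as listed in the 'Information on EEG' section.
BANDS = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13),
         'beta': (13, 30), 'gamma': (30, 42)}

def band_powers_db(signal, fs):
    # One-sided power spectrum of one EEG channel (simple periodogram).
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    out = {}
    for name, (lo, hi) in BANDS.items():
        in_band = (freqs >= lo) & (freqs < hi)
        # Log of the summed in-band power: an absolute band power in dB.
        out[name] = 10.0 * np.log10(power[in_band].sum() + 1e-12)
    return out
```

<p>A 10 Hz test tone fed through this function should dominate the alpha band, which is a convenient way to check the plumbing before connecting real sensor data.</p>
<p><span>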
EEG measurements are recorded in bels/dB to determine the PSD within each of the frequency ranges.</span></p>\r\n<p><span>The EEG readings are translated into music in real time, and the subjects are instructed to employ deep breathing exercises while they focus on the musical feedback.<br /></span><span>The time-base for the musical events can be variable and based on the brainwave data, or set to a fixed clock, or some combination of the two.</span></p>\r\n<p><span>The use of scales, modes and chords, as well as rhythms and performance characteristics, needed to be considered beforehand so that a finite set of parameters extracted from the EEG data set could be parsed and used to produce a well-formed, dynamic, and recognizable piece of music.</span></p>\r\n<p><span>There are three main sections in this Max patch: </span></p>\r\n<p><strong>1: The EEG data capture section.<br />2: The EEG data conversion section.<br />3: The Sound generation and DSP section.</strong></p>\r\n<p><strong>EEG data capture</strong></p>\r\n<p><span>The </span><span>EEG data capture </span><span>section receives EEG data from the Muse headband, which is converted to OSC data and transmitted over WiFi via the iOS app Mind Monitor. That data is then split into the five separate brainwave frequency bandwidths: delta, theta, alpha, beta and gamma. Additional data is also captured, including accelerometer, gyroscope, blink and jaw clench, in order to control for any artifacts in the data capture. Sensor connection data is used to visualize the integrity of the sensor&rsquo;s connection to the subject. 
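</span></p>
<p>The routing in this capture stage can be sketched as a small message handler (a pure-Python sketch; the OSC address shown is modeled on Mind Monitor's absolute band power messages and should be checked against the app's documentation):</p>

```python
def route_band_message(address, value, bands):
    # Split an incoming OSC address such as '/muse/elements/alpha_absolute'
    # and store the value under its band name ('alpha', 'beta', ...).
    # Non-band messages (accelerometer, gyroscope, ...) are left untouched.
    leaf = address.rsplit('/', 1)[-1]
    suffix = '_absolute'
    if leaf.endswith(suffix):
        bands[leaf[:-len(suffix)]] = value
    return bands
```

<p>In the Max patch itself this dispatch is done with routing objects on the incoming OSC stream; here a dict simply stands in for the five band outlets.</p>
<p><span>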
PPG data is also captured for use in a future iteration of the project.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 6\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span><strong>EEG data conversion</strong><br /></span><span>The second section, </span><span>EEG data conversion</span><span>, accepts the EEG bandwidth data representing specific event-related potentials, and translates them to musical events.</span></p>\r\n<p><span>First, significant thresholds for each brainwave frequency bandwidth are defined. These are chosen based on average EEG measurements taken prior to the use of the musical feedback. When those thresholds are reached or exceeded, an event is triggered. Depending on the mappings, those events can be one or more of several types of operations: the sounding of a note, a change in pitch or scale or mode, note values and timings, and/or other generative performance characteristics.</span></p>\r\n<p><span>This section comprises three subsections that format their data output differently, depending on the use case:</span></p>\r\n<p><span>1. </span><strong>Internal Sound Generation and DSP </strong><span>for use completely within the Max environment.<br />2. </span><strong>External MIDI </strong><span>for use with MIDI-equipped hardware or software.<br />and</span></p>\r\n<p><span>3. 
</span><strong>External Frequency and gate</strong><span>, for use with modular synthesizer hardware.<br />Each of these can be used separately or simultaneously, depending on the needs of the&nbsp;</span><span>work.</span></p>\r\n<p><span>For the data conversion in this iteration of the project, the event-related potentials are mapped in the following way:<br /></span><span>Changes in </span><strong>alpha</strong><span>, relative to the predefined threshold, govern the triggering of notes, as well as the scale and mode.</span></p>\r\n<p><span>Changes in </span><strong>theta</strong><span>, relative to the threshold, influence note value.<br />Changes in </span><strong>beta</strong><span>, relative to the threshold, influence spatial qualities like reverberation and delay.<br />Changes in </span><strong>delta</strong><span>, relative to the threshold, influence the degree of spatial effects. Changes in </span><strong>gamma</strong><span>, relative to the threshold, influence timbre.</span></p>\r\n<p><span>Any of these mappings or threshold decisions can be easily changed to accommodate a different thesis or set of standards.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 7\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong>Sound generation and DSP</strong></p>\r\n<p><span>The third section is </span><span>Sound generation and DSP</span><span>. It is responsible for the final sonification of the data translated from the </span><span>EEG data conversion </span><span>section. This section includes synthesis models, timbral characteristics, and spatial effects.<br />This project uses three synthesized voices created in Max 8 for the generative musical feedback. 
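</span></p>
<p>Before detailing the voices, the band-to-parameter mapping described above can be sketched as a small routing function (an illustrative Python sketch; the event names are ours, not the patch's):</p>

```python
# Hypothetical event names mirroring the band-to-parameter mapping above.
MAPPING = {'alpha': 'trigger_note_scale_mode',
           'theta': 'note_value',
           'beta':  'spatial_quality',
           'delta': 'spatial_amount',
           'gamma': 'timbre'}

def triggered_events(readings, thresholds):
    # An event fires whenever a band's reading reaches or exceeds its
    # predefined threshold (chosen from baseline EEG averages).
    return sorted(MAPPING[band] for band in MAPPING
                  if readings.get(band, float('-inf')) >= thresholds[band])
```

<p>Missing bands simply never fire, which matches the idea that only significant, above-threshold activity should produce a musical event.</p>
<p><span>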
There are two subtractive voices that each use a mix of sine, sawtooth and triangle waves, and one FM voice.</span></p>\r\n<p><span>The timbral effects employed are waveform mixing, frequency modulation, and high-pass, band-pass and low-pass filtering. The spatial effects used include reverberation and delay. In addition to the initial settings of the voices, each of the timbral and spatial effects is modulated by separate event-related potential data captured by the EEG.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 8\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong>Conclusions:</strong></p>\r\n<p><span>This project is a contemporary interpretation of an idea I've been interested in for many years, starting with an investigation into bidirectional EKG biofeedback.</span></p>\r\n<p><span>My initial experience with the topic was during a university degree at Rutgers University, in psychophysics (underwritten by The University of Medicine and Dentistry of New Jersey). While at UMDNJ, I worked directly with the doctors who were at the forefront of psychophysiological research, whose work was focused on reducing stress in asthmatic subjects for the purposes of lessening the frequency of attacks. [13]</span></p>\r\n<p><span>At the time, the technology required to explore this idea was of considerable size and prohibitively expensive for all but medical or formally funded academic purposes. With the current availability of low-cost electroencephalography (EEG) devices and heart rate monitors, the possibility of autonomous exploration of these concepts has become a reality.</span></p>\r\n<p><span>The procedure, when using this work for the exploration of the physiological effects of neuro- and bi-directional feedback, starts with obtaining and comparing two data sets: a control and a therapeutic set. 
The control set records brainwave data without utilizing musical feedback or breathing exercises, while the therapeutic set records the brainwave data with them.</span></p>\r\n<p><span>Although this project is primarily concerned with changes in the alpha brainwave frequency range, changes in other brainwave frequency ranges are used to trigger events in the feedback in such a way as to provide cues that a course correction is required by the subject. This approach was adopted to ensure that a subject&rsquo;s loss of focus (and/or a drop in the Power Spectral Density of alpha) would not negatively affect the generation of novel musical feedback. Depending on the subject&rsquo;s state of relaxation (and the PSD of the other four EEG frequency ranges measured), the performance and phrasing of the musical feedback is designed to change in such a way that it is expected to encourage greater attention. With the help of consistent feedback, the hope is that the subject would be able to regain their focus.</span></p>\r\n<p><span>Preliminary data has shown that alpha readings were higher, on average, during the therapeutic trials. Also, a higher overall peak value was achieved during the therapeutic phase. This suggests that this feedback model is an effective way of increasing activity in the alpha brainwave frequency range, which is the beneficial </span><span>physiological and psychological effect I was hoping to find, although more data needs to be collected before any definitive conclusions can be drawn.</span></p>\r\n<p><span>At this point, the system has been tested and is functional, and further research can begin. 
The modular design of the system allows for any variables to be included or excluded, which will be necessary moving forward with the research, in order to more thoroughly test the foundational elements of the thesis, as well as any musicological exploration and analysis that defining the musical feedback raises.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 9\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>In the meantime, I am already using the software as a compositional and performance system to create recorded works and live performances. I am also planning to mount the project as an interactive installation in a live setting and to create a tangible two-dimensional representation, in some visual language, of each session&rsquo;s narrative, compressing the entirety of the experience into a single frame.</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 10\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong>Credits &amp; Acknowledgments:</strong></p>\r\n<p><span>IRCAM<br />Cycling &rsquo;74<br />Carol Parkinson, Executive Director of Harvestworks<br />Melody Loveless, NYU &amp; Max certified trainer<br />Dr. Paul M. Lehrer and Dr. 
Richard Carr<br />InteraXon Muse electroencephalography headband<br />James Clutterbuck (Mind Monitor developer)</span></p>\r\n<p><span style=\"text-decoration: underline;\"><strong>Contact Details:</strong></span></p>\r\n<p><em>Johnny Tomasiello</em></p>\r\n<p><em>johnnytomasiello@gmail.com</em></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 11\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>References:</span></p>\r\n<p><span>[1] </span><span>Brian Eno. &ldquo;Empty Formalism&rdquo;<br />Brian Eno in conversation with Thomas Oberender on &ldquo;Hexadome.&rdquo;</span></p>\r\n<p><span>[2] </span><span>J. Cage, R. Kostelanetz. </span><span>John Cage Writer: Previously Uncollected Pieces</span><span>. New York: Limelight (1993)</span></p>\r\n<p><span>[3] </span><span>J. J. Bird, A. Ekart, C. D. Buckingham, D. R. Faria. &ldquo;Mental Emotional Sentiment Classification with an EEG-based Brain-Machine Interface&rdquo;, International Conference on Digital Image &amp; Signal Processing (DISP&rsquo;19), Oxford, UK (2019)</span></p>\r\n<p><span>[4] </span><span>K. Madden and G.K. Savard. &ldquo;Effects of Mental State on Heart Rate and Blood Pressure Variability in Men and Women&rdquo; in </span><span>Clinical Physiology </span><span>15, 557&ndash;569 (1995)</span></p>\r\n<p><span>[5] </span><span>F. Riganello et al. &ldquo;How Can Music Influence the Autonomic Nervous System Response in Patients with Severe Disorder of Consciousness?&rdquo; in </span><span>Frontiers in Neuroscience </span><span>vol. 9, 461 (2015)</span></p>\r\n<p><span>[6] </span><span>H. Marzbani et al. 
&ldquo;Methodological Note: Neurofeedback: A Comprehensive Review on System Design, Methodology and Clinical Applications&rdquo; in </span><span>Basic and Clinical Neuroscience Journal </span><span>vol. 7, 143&ndash;158 (2016)</span></p>\r\n<p><span>[7] </span><span>P.M. Lehrer and </span><span>R. Carr </span><span>&ldquo;</span><span>Stress Management Techniques: Are They All Equivalent, or Do They Have Specific Effects?&rdquo; in </span><span>Biofeedback and Self-Regulation </span><span>(1994)</span></p>\r\n<p><span>[8] </span><span>J. Ehrhart, M. Toussaint, C. Simon, C. Gronfier, R. Luthringer, G. Brandenberger. &ldquo;Alpha Activity and Cardiac Correlates: Three Types of Relationships During Nocturnal Sleep&rdquo; in </span><span>Clinical Neurophysiology </span><span>vol. 111, 940&ndash;946 (2000)</span></p>\r\n<p><span>[9] </span><span>B. Lutters, P. J. Koehler. &ldquo;Brainwaves in Concert: the 20th Century Sonification of the Electroencephalogram&rdquo; in </span><span>Brain </span><span>139 (Pt 10), 2809&ndash;2814 (2016)</span></p>\r\n<p><span>[10] E. D. Adrian, B. H. C. Matthews. &ldquo;The Berger Rhythm: Potential Changes From The Occipital Lobes in Man&rdquo; in </span><span>Brain </span><span>57, Issue 4 (December 1934)</span></p>\r\n<p><span>[11] M Atkinson, MD, &ldquo;How To Interpret an EEG and its Report&rdquo; (2010)</span></p>\r\n<p><span>[12] </span><span>E.R. Miranda. &ldquo;Brain&ndash;Computer Music Interfacing: Interdisciplinary Research at the Crossroads of Music, Science and Biomedical Engineering&rdquo; in E.R. Miranda, J. Castet, ed. </span><span>Guide to Brain-Computer Music Interfacing</span><span>. London: Springer-Verlag, 1&ndash;27 (2014)</span></p>\r\n<p><span>[13] </span><span>P.M. Lehrer et al</span><span>. 
</span><span>&ldquo;Relaxation and Music Therapies for Asthma among Patients Prestabilized on Asthma Medication&rdquo; in </span><span>Journal of Behavioral Medicine </span><span>17, 1&ndash;24 (1994)</span></p>\r\n<p><span>[14] </span><span>P. M. Lehrer, R. Gevirtz. &ldquo;Heart Rate Variability Biofeedback: How and Why Does It Work?&rdquo; in </span><span>Frontiers in Psychology </span><span>vol. 5, 756 (2014)</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<div class=\"page\" title=\"Page 12\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>[15] S. A. Plotnikov et al. &ldquo;Artificial Intelligence-Based Neurofeedback&rdquo; in </span><span>Cybernetics and Physics </span><span>vol. 8, 287&ndash;291 (2019)</span></p>\r\n<p><span>[16] J. Cage, R. Kostelanetz. </span><span>John Cage Writer: Previously Uncollected Pieces</span><span>. New York: Limelight (1993)</span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<p></p>",
        "topics": [
            {
                "id": 565,
                "name": "Biofeedback",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 330,
                "name": "Dsp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 277,
                "name": "Max 8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 564,
                "name": "Neurofeedback",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 563,
                "name": "Neuroscience",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 59,
                "name": "Synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20945,
            "forum_user": {
                "id": 20934,
                "user": 20945,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Tomasiello-modular_01b.png",
                "avatar_url": "/media/cache/8e/26/8e262109aba7469cf1a5c6158552e9f8.jpg",
                "biography": "Johnny Tomasiello is a multidisciplinary artist and composer-researcher, with a deep interest in expanded conceptualizations of sound, visuals, and time. His work employs methodologies across media, and is informed by research into neuroscience, psychophysics and biofeedback.  \n\nFocused on the relationship between perception and the mechanics of physiology, his immersive works, compositions, and performances reveal otherwise invisible processes in physiological and technological systems. Drawing on custom-built instruments and software, his work references mechanisms of expression and experience through data sonification, biofeedback, and reciprocal physiological systems.\n\nAs a performer, Tomasiello has produced live immersive performances and lectures featuring his interactive computer-assisted compositional performance systems and Brain-Computer Interfaces (BCI) that create, manipulate, and deconstruct audio and visuals, as well as physiological responses. He has lectured on the subject, staged live performances, scored films, and shown canvases and sound works in galleries and at institutions in the US and abroad.",
                "date_modified": "2026-02-12T19:09:20.143419+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "johnnytomasiello",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "moving-towards-synchrony-3",
        "pk": 1192,
        "published": true,
        "publish_date": "2022-07-12T18:54:13+02:00"
    },
    {
        "title": "Traces de l’expressivité : partition de flux de données gestuelles pour les œuvres interdisciplinaires",
        "description": "Résidence en recherche artistique 2017.18.\r\nAlireza Farhang.\r\nEn collaboration avec les équipes Représentations musicales et Interaction Son Musique Mouvement de l’Ircam-STMS.",
        "content": "<h3 class=\"row\">R&eacute;sidence en recherche artistique 2017.18</h3>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<p><strong>Traces de l&rsquo;expressivit&eacute; : partition de flux de donn&eacute;es gestuelles pour les &oelig;uvres interdisciplinaires</strong><br />En collaboration avec les &eacute;quipes<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/repmus/\">Repr&eacute;sentations musicales</a><span>&nbsp;</span>et<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/issm/\">Interaction Son Musique Mouvement</a><span>&nbsp;</span>de l&rsquo;Ircam-STMS.</p>\r\n<p><span>ns le cadre de la cr&eacute;ation d&rsquo;&oelig;uvres multidisciplinaires bas&eacute;es sur la musique, l&rsquo;importance de la communication entre les artistes de disciplines diff&eacute;rentes a conduit le compositeur &agrave; concevoir une partition universelle de haut niveau.</span>&nbsp;Ce nouveau paradigme doit nous permettre de formaliser une technique et une technologie afin de transmettre les intentions et les propos du compositeur aux chor&eacute;graphes, sc&eacute;nographes et tout autre artiste impliqu&eacute; dans la mise en sc&egrave;ne dramaturgique, visuelle ou sonore de l&rsquo;&oelig;uvre. Cette partition hybride consiste en une partition graphique et une partition de flux de donn&eacute;es gestuelles. Cette derni&egrave;re, l&rsquo;objet de la r&eacute;sidence en recherche musicale et artistique de l&rsquo;Ircam, vise &agrave; fournir sous forme de donn&eacute;es informatiques, le rendu des gestes sonores &eacute;lectroniques et instrumentaux. Les gestes physiques des protagonistes sont eux aussi traduits et formalis&eacute;s informatiquement via la partition de flux de donn&eacute;es. 
La luminosit&eacute; spectrale, l&rsquo;harmonicit&eacute;, l&rsquo;intensit&eacute;, la densit&eacute;, la qualit&eacute; des mouvements, etc., constituent les param&egrave;tres significatifs de cette partition qui seront utilis&eacute;s, entre autres, dans la sc&eacute;nographie, l&rsquo;&eacute;clairage, la vid&eacute;o et l&rsquo;installation. <span>Dans un deuxi&egrave;me temps, le travail sera focalis&eacute; sur la formalisation d&rsquo;une technique et le d&eacute;veloppement d&rsquo;une technologie, celles-ci permettant de r&eacute;aliser la premi&egrave;re tentative d&rsquo;une interface de la partition hybride de haut niveau.&nbsp;</span></p>\r\n<h6></h6>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Alireza Farhang</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\" style=\"text-align: center;\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202018/.thumbnails/alireza_farhang2.jpg/alireza_farhang2-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographie</h3>\r\n<p>Alireza Farhang prend ses premi&egrave;res le&ccedil;ons de musique aux c&ocirc;t&eacute;s de son p&egrave;re. Il &eacute;tudie ensuite le piano aupr&egrave;s d'Emmanuel Melikaslanian et de Rapha&euml;l Minaskanian et la composition &agrave; l&rsquo;universit&eacute; de T&eacute;h&eacute;ran avec Alireza Machayeki. &Agrave; la suite de ses &eacute;tudes, il enseigne &agrave; l&rsquo;universit&eacute; de T&eacute;h&eacute;ran et fonde sa propre &eacute;cole de musique. En 2002, il choisit d&rsquo;approfondir ses connaissances aupr&egrave;s de Michel Merlet &agrave; l'&Eacute;cole normale de musique de Paris. 
B&eacute;n&eacute;ficiant de la bourse Albert Roussel, il obtient ses dipl&ocirc;mes sup&eacute;rieurs en composition et en orchestration. Il suit &eacute;galement les cours de composition d'Ivan Fedele au conservatoire de Strasbourg, et a l&rsquo;occasion de travailler avec Toshio Hosokawa, Michael Jarrell, Hans Peter Kyburz, Brice Pauset, Yan Maresz, Tristan Murail, Olga Neuwirth, Kaija Saariaho et G&eacute;rard Pesson. Il participe au cursus de composition et d'informatique musicale de l&rsquo;Ircam dans le cadre du programme europ&eacute;en ECMCT en partenariat avec la Technische Universit&auml;t, l'Universit&auml;t der K&uuml;nste et la Hochschule f&uuml;r Musik Hanns Eisler &agrave; Berlin. B&eacute;n&eacute;ficiant de la double formation en musique occidentale et musique persane le menant &agrave; conjuguer ces deux univers musicaux, la question du m&eacute;tissage culturel et la probl&eacute;matique d'incompatibilit&eacute; entre les valeurs traditionnelles et modernes font l'objet de ses recherches compositionnelles. Fondateur du concours de composition Musica Ficta, il est l'un des membres fondateurs de l'Association des compositeurs iraniens de la musique contemporaine, ACIMC. Sa musique est jou&eacute;e par des ensembles de renom dans de nombreux pays.</p>\r\n</div>\r\n</div>\r\n<h2 class=\"dotted\">Liens</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"http://www.alirezafarhang.com/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>http://www.alirezafarhang.com</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "traces-of-expressivity-high-level-score-of-data-stream-for-interdisciplinary-works",
        "pk": 20,
        "published": true,
        "publish_date": "2019-03-21T12:46:29+01:00"
    },
    {
        "title": "Présentations par le Conservatoire de Strasbourg et la Haute Ecole des Arts du Rhin",
        "description": "Les professeurs et étudiants en composition électroacoustique et instrumentale du Conservatoire de Strasbourg et de la Haute Ecole des Arts du Rhin présenteront divers projets de recherche, de développement et de composition impliquant des processus et des techniques en temps réel, assistés par ordinateur et interactifs.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" height=\"330\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par : Conservatoire de Strasbourg et Haute Ecole des Arts du Rhin</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/conservatoiredestras/\">Biographie</a></p>\r\n<p style=\"text-align: justify;\">-</p>\r\n<p style=\"text-align: justify;\">Pr&eacute;sentations par :</p>\r\n<ul style=\"text-align: justify;\">\r\n<li><strong>Tom Mays - CRT : un patch Max pour la composition et la performance en temps r&eacute;el - pr&eacute;sentation de la derni&egrave;re version</strong></li>\r\n</ul>\r\n<p style=\"padding-left: 30px; text-align: justify;\">CRT (Composition in Real Time) est un patch Max con&ccedil;u pour faciliter la composition en temps r&eacute;el, qu'il s'agisse d'une introduction ou d'un niveau avanc&eacute;.</p>\r\n<ul style=\"text-align: justify;\">\r\n<li><strong>Lorenzo PANICONI - <em>Composer de la musique &eacute;lectroacoustique avec Antescofo : travailler l'esth&eacute;tique de la partition augment&eacute;e</em></strong></li>\r\n</ul>\r\n<p style=\"padding-left: 30px; text-align: justify;\">Pr&eacute;sentation d'un projet de recherche (et) de cr&eacute;ation en cours, dont l'objectif principal est d'apprendre et d'int&eacute;grer Antescofo comme outil dans ma pratique artistique. Je tire parti de certaines des principales caract&eacute;ristiques d'Antescofo (partition centralis&eacute;e, d&eacute;finition des fonctions et du processus, etc.) 
pour faciliter l'&eacute;criture, la lecture, l'interpr&eacute;tation et l'ex&eacute;cution de compositions &eacute;lectroacoustiques, ainsi que l'adaptation de pi&egrave;ces pr&eacute;existantes.</p>\r\n<ul style=\"text-align: justify;\">\r\n<li><strong>Guilherme RIBEIRO DA CUNHA - <em>La voix hybride, le corps vocal et l'&eacute;lectro-narrativit&eacute; : deux pi&egrave;ces br&eacute;siliennes post-pand&eacute;miques pour voix f&eacute;minine et &eacute;lectronique</em></strong></li>\r\n</ul>\r\n<p style=\"padding-left: 30px; text-align: justify;\">Dans cette pr&eacute;sentation, nous explorerons les pi&egrave;ces \"corriente de agua\" (2022) et \"Maria !\" (2022), de Guilherme Ribeiro. Nous analyserons la diversit&eacute; expressive de la voix utilis&eacute;e dans ces pi&egrave;ces, ainsi que sa propre hybridation, passant de la voix parl&eacute;e &agrave; la voix chuchot&eacute;e, chant&eacute;e, &eacute;touff&eacute;e, cri&eacute;e, m&acirc;ch&eacute;e, etc. Nous examinerons &eacute;galement la construction des parties &eacute;lectroniques (fixes) de chaque pi&egrave;ce, agissant comme un &eacute;l&eacute;ment narratif sonore dans la po&eacute;tique et la structure sonore de la pi&egrave;ce. 
Enfin, nous verrons comment la pand&eacute;mie du virus Covid-19 a influenc&eacute; la composition et le travail de collaboration &agrave; distance entre les chanteurs et le compositeur dans ces deux pi&egrave;ces aux aspects si corporels dans le domaine vocal.</p>\r\n<ul style=\"text-align: justify;\">\r\n<li><strong>Alonso HUERTA - <em>Une alternative num&eacute;rique aux diffuseurs sonores originaux des ondes Martenot en utilisant l'analyse du mod&egrave;le r&eacute;sonant</em></strong></li>\r\n</ul>\r\n<p style=\"padding-left: 30px; text-align: justify;\">Travail en cours sur l'&eacute;mulation num&eacute;rique haute fid&eacute;lit&eacute; des l&eacute;gendaires diffuseurs sonores des ondes Martenot en utilisant des mod&egrave;les r&eacute;sonants (maxmodres) dans Max et RNBO</p>\r\n<ul style=\"text-align: justify;\">\r\n<li><strong>Laurent WEREY - <em>G&eacute;n&eacute;rer de la vid&eacute;o &agrave; partir de l'audio - travailler avec des g&eacute;n&eacute;rateurs 3D AI</em></strong></li>\r\n<li><strong>Jad EL KHECHEN - <em>Le geste au service de l'expression musicale </em></strong></li>\r\n</ul>\r\n<p style=\"padding-left: 30px; text-align: justify;\">CataRT, MuBu et la reconnaissance de la main par cam&eacute;ra</p>\r\n<ul style=\"text-align: justify;\">\r\n<li><strong>Lucas Andr&eacute;a MOREAU - <em>MAX/RNBO, vers des dispositifs l&eacute;gers et autonomes</em></strong></li>\r\n</ul>\r\n<p style=\"padding-left: 30px; text-align: justify;\">Exploration de la relation entre Raspberry Pi et l'environnement de Max/RNBO</p>\r\n<ul style=\"text-align: justify;\">\r\n<li><strong>Raphael CORNIGLION FACCIOLI - <em>&Eacute;puisement et limites du traitement en temps r&eacute;el - entre pr&eacute;sence sonore et aes num&eacute;rique</em></strong></li>\r\n</ul>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\"><strong><a 
href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 2435,
            "forum_user": {
                "id": 2433,
                "user": 2435,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/57f75a148a5ee2135629bcbe5dc2fadd?s=120&d=retro",
                "biography": "The composition department at the Strasbourg Conservatory is shared with composition at the Superior Music Academy, as part of the HEAR (Haute Ecole des Arts du Rhin). There are two composition classes: Instrumental, Vocal, and mixed Composition; and Electroacoustic Composition. They work in tandem to encourage the creation of an active repertory of mixed music – for instruments and electronics, as well as a full range of pieces from solo instrument to orchestra, and from live electronics to fixed media and acousmatic pieces, to interactive installations. Daniel D'Adamo is the principal professor of instrumental composition, replaced temporarily by Ivan Solano for the 2023-2024 year. Tom Mays is the principal professor of electroacoustic composition. Between the Conservatoire and the HEAR, students range from debutant to university level - bachelor, master, and doctorate.\n\nThe university level music school is combined with visual arts and design to form the multi-site and multi-disciplinary Haute Ecole des Arts du Rhin (HEAR). Projects of all kinds are encouraged between music and visual and special arts.\n\nThe Strasbourg Conservatory has been a Forum member since 2013/2014.",
                "date_modified": "2025-12-02T12:05:51.417943+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 725,
                        "forum_user": 2433,
                        "date_start": "2025-09-09",
                        "date_end": "2026-12-09",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 20,
                        "is_valid": true
                    }
                ]
            },
            "username": "conservatoiredestras",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "presentations-by-strasbourg-conservatory-an-haute-ecole-des-arts-du-rhin",
        "pk": 2776,
        "published": true,
        "publish_date": "2024-02-28T17:28:58+01:00"
    },
    {
        "title": "Musicians Auditory Perception",
        "description": "Presented during the IRCAM Forum @NYU 2022\r\n\r\nCo-authored by Berk Schneider (University of California, San Diego) [Co-First Author], Florian Grond (McGill University) [Co-First Author], Jeanne Côté (McGill University), Pedram Diba (McGill University), Min Seok Peter Ko (UCSD), Sang Song (UCSD), Tiange Zhou (UCSD), Shahrokh Yadegari  (UCSD) [Principal Investigator].",
        "content": "<p><a href=\"https://www.actorproject.org/project-reports/musicians-auditory-perception?fbclid=IwAR30zyXCV5KYD_xAIW1n9GKWzD6opyuEsIVwsNv4pwvJinHbYB87Ds193jc\">https://www.actorproject.org/project-reports/musicians-auditory-perception?fbclid=IwAR30zyXCV5KYD_xAIW1n9GKWzD6opyuEsIVwsNv4pwvJinHbYB87Ds193jc</a><a href=\"https://www.actorproject.org/project-reports/musicians-auditory-perception?fbclid=IwAR30zyXCV5KYD_xAIW1n9GKWzD6opyuEsIVwsNv4pwvJinHbYB87Ds193jc\" title=\"Musicians Auditory Perception\"></a></p>\r\n<p></p>\r\n<p>The purpose of the Musician Auditory Perception (MAP) project is to collect quantitative data via sonic ethnography in order to promote and analyze, both literally and metaphorically, (a) sonic collaboration between auditory learners, (b) modes of sound information gathering, and (c) the creative expression of musicians, while disrupting common pedagogic practices that reinforce hierarchical education. Auditory learning is not necessarily a linear process, but a dynamic one &mdash; a skillset synergistic and deeply connected with creation. Therefore, MAP will enable three student composer-performer duos from two Analysis, Creation, and Teaching of Orchestration (ACTOR) partner institutions, UC San Diego (UCSD) and McGill University (McGill), to document their creation processes with binaural recording devices and first-person vision &mdash; captured by earpiece microphones and wearable HD cameras &mdash; effectively mapping audiovisual boundary objects. 
The intended outcome is that these sonic and visual boundary objects will promote skill sharing between all participants by bridging differences in perception during the creation and reproduction of musical timbres, allowing a digital transfer of knowledge via an individual perspective in the time of the Covid-19 pandemic.</p>\r\n<p>Through analysis of boundary objects, MAP&rsquo;s interdisciplinary-participatory research design seeks to understand how musicians balance cognitive and technical dimensions within their practice in order to produce timbres that cannot necessarily be measured in totality, especially when it comes to unearthing tacit knowledge, where typical interviews and text-based case studies are only partially successful. How does ACTOR&rsquo;s developing ontological classification of timbre, including its descriptors, interact with tacit knowledge and the epistemic authority of each participating musician, e.g., gut feeling with know-how, creativity with problem-solving, intuition with skills, and perceptions or judgements with lessons learned? In addition, a cross-referencing of thematic analysis through quantitative thick descriptions and signal processing of binaural audio will provide more objective constructs for evaluation. For example, if a sound is deemed &ldquo;bright&rdquo; by the majority of participants, the term will be measured in conjunction with its spectral centroid.</p>",
        "topics": [],
        "user": {
            "pk": 17065,
            "forum_user": {
                "id": 17062,
                "user": 17065,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Diba_Head-Shot.JPG",
                "avatar_url": "/media/cache/b1/6c/b16c46e64f180833701e15bd09ef37b2.jpg",
                "biography": "Pedram Diba (b. 1993) is an Iranian-American composer whose work investigates how sound, space, light, and gesture function as interrelated compositional materials. His music has been praised for its “powerful interaction and striking richness of components” (ResMusica) and its “deep sense of musical energy and great attention to the organicity and morphological profiles of sound” (Pierre Jodlowski).\n\nDiba's works have been presented at ICMC, SICMF, SEAMUS, IRCAM Forum, Splice, and New Music Gathering, and performed at venues including the DiMenna Center (New York), Le CENTQUATRE-PARIS, Constellation Chicago, CKL Stage (Seoul), and Le Gesù (Montreal). A member of the ACTOR Project since 2019, he has contributed to projects like the CORE Ensemble and Space As Timbre, integrating technology and collaborative practices.\n\nDiba earned his B.M. at the University of Oregon, his M.M. at McGill University under Philippe Leroux, and completed IRCAM's Cursus under Pierre Jodlowski and Claudia Scroccaro. He is currently a Ph.D. candidate in composition and music technology at Northwestern University. His works are published by BabelScores.",
                "date_modified": "2025-08-26T08:54:49.340031+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 482,
                        "forum_user": 17062,
                        "date_start": "2023-10-05",
                        "date_end": "2025-10-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 370,
                                "membership": 482
                            },
                            {
                                "id": 409,
                                "membership": 482
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "pedramdiba",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "musicians-auditory-perception",
        "pk": 1270,
        "published": true,
        "publish_date": "2022-08-28T20:54:38+02:00"
    },
    {
        "title": "BARS: A Multiplayer Sensorial Experience - COVO collective",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>By <a href=\"https://forum.ircam.fr/profile/covo/\">COVO collective</a></p>\r\n<p>As society became more attached to virtualisation, the need for digital spaces to socialise and entertain arose. While the visual aspect of those spaces keeps evolving, the audio experience falls behind.&nbsp;<br />To solve this issue, we created a multiplayer online club that users can explore with their avatars, interacting with each other, and where artists can perform a DJ set enhanced by a 5.1 surround setup dedicated to spatial ambient noise and effects. Additionally, the user can balance the two audio streams to better suit their preferences.<br />While this configuration makes the experience more immersive, it&rsquo;s far from what technology can achieve. A true virtual world makes it possible to hear and see beyond the boundaries of our reality and have an experience that&rsquo;s both personal and collective.<br />That&rsquo;s why the club is a cage of an eco-futuristic extraterrestrial menagerie with neoncore decorations and the user can engage in brief single-player optional experiences called &ldquo;Dreams&rdquo;, where they have to complete a path by moving in four dimensions. There are different types of Dreams, each user gets assigned one and a corresponding stem gets added to the current artist&rsquo;s performance. This stem is spatially modulated according to its position in the fourth dimension.<br />In essence, our project creates a multiplayer sensorial experience focused on new ways to enjoy musical performances and social space, while fully taking advantage of the potential of a non-physical world and making it interactive and immersive.</p>",
        "topics": [
            {
                "id": 1116,
                "name": "covo",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1120,
                "name": "djset",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1118,
                "name": "multiplayer",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1121,
                "name": "rap",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1122,
                "name": "social experience",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1119,
                "name": "spacial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1117,
                "name": "virtualisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32921,
            "forum_user": {
                "id": 32873,
                "user": 32921,
                "first_name": "Covo",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Logo_Small.png",
                "avatar_url": "/media/cache/b4/15/b4150ab6c3ff5ca68d47fd98e52a3a42.jpg",
                "biography": "We are an Italian collective specializing in music, visual art, and creative software development.\nThe vision of the collective is inspired by 90’s hip-hop crews filtered through a contemporary narrative. \nRap music meets the sound of the latest music synthesis technologies, visual artists turn to digital mediums to create both 2D and 3D artworks, and software development expands the horizons of our creative outlets.\nOur journey in music digitalization started when we created Covo Theatre, our live concert platform, where we held Apolide: a virtual event that took place on a wrecked space station in which we performed the music from our last 2 albums.\nIn 2022 we collaborated with the Italian label Pluggers to make “Fuck Pop”, an online community-based project that combines music with decentralized technologies which brings together 80 creatives from all over Italy. \nCovo is formed by Filippo Fernando Palla (aka Fenice) as creative & executive director, rapper and producer, Federico Sodini (aka Delt4) and Giovanni Virguti (aka Raudo) as rappers, Giulio Gentile (aka Ana) as Visual Department supervisor and graphic designer, and Luigi Giardino (aka Gardenn) as videomaker.",
                "date_modified": "2023-02-03T08:45:26+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "covo",
            "first_name": "Covo",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "bars-a-multiplayer-sensorial-experience",
        "pk": 2036,
        "published": true,
        "publish_date": "2023-02-03T06:19:18+01:00"
    },
    {
        "title": "Workshop SPAT Revolution by Gaël Martinet",
        "description": "Discover what’s new in SPAT Revolution and how it impacts yourdaily workflow.\r\nLearn how in-software new features and DAW plug-in improvements streamline your workflow and open up advanced possibilities for immersive audio production.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>This technical workshop offers an exclusive preview of SPAT Revolution 26.03, the upcoming March release that introduces significant architectural and workflow enhancements for immersive audio production. Designed for composers, sound designers, and audio engineers working in advanced spatial environments, this session will provide a deep technical understanding of the new tools, data flows, and integration mechanisms introduced in this version.<br />We will explore in detail the new feature set of SPAT Revolution 26.03, including:</p>\r\n<p><strong>&bull; Internal Playback, Recording, and Automation Engine</strong></p>\r\n<p>Dive into the build-in audio engine enabling native playback, and input and automation capture directly inside SPAT Revolution. This includes support for ADM file import, improved synchronization workflows, and external synchronization via timecode.</p>\r\n<p><strong>&bull; Integrated Animation System</strong><br />Learn how the new built‑in animation engine handles low‑frequency oscillators, free‑form and trajectory curves. We will break down the data structures involved, how animation streams interact with object metadata and synchronization, and the implications for real‑time rendering in immersive workflows.<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8847f20151082c494648a35d9f0e9520.png\" /></p>\r\n<p><strong>&bull; Object-Level FX</strong><br />Understand the expanded DSP layer, including per‑object audio processing, advanced master and output EQ stage. 
We will examine the processing capabilities and the integration with MiRA Analyzer software.</p>\r\n<p><strong>&bull; Unified Cue System for Event Management</strong><br />Explore the new cue architecture, which enables deterministic control of playback, animations, snapshot recalls, and OSC and MIDI output messages, triggered via MIDI, timecode, or OSC. This module is especially relevant for performance environments, installations, and automated spatial workflows requiring precise sequencing and real‑time triggering.</p>\r\n<p><strong>&bull; Protection Zone Morphing for Diverse Speaker Arrays</strong><br />Gain insight into the updated protection zone model, which enhances support for spherical, polar, and hybrid speaker layouts. We will analyze how SPAT Revolution morphs protection geometry to maintain spatial coherence across varying speaker distributions.</p>\r\n<p><strong>&bull; Enhanced DAW Plug‑in and Integration Layer</strong><br />Discover the new plug‑in UI, allowing direct configuration of speaker arrangements from within your DAW. This includes improved bidirectional communication, parameter exposure models, and a streamlined workflow for hybrid in‑DAW / in‑SPAT production pipelines.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0087d6d1683d0294c2fc7c67f32b5eab.png\" /></p>\r\n<p><br /><br />This workshop is ideal for professionals seeking a deeper technical understanding of SPAT Revolution&rsquo;s internal systems, upcoming capabilities, and integration strategies. As a preview session, it will highlight not only the features of the March 26.03 release, but also their impact on real‑world immersive production workflows and system design.</p>",
        "topics": [],
        "user": {
            "pk": 44168,
            "forum_user": {
                "id": 44110,
                "user": 44168,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/8276b42ec6c8a92c38ad023905905719?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-02-20T11:18:15.526200+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nicolasflux",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "workshop-spat-revolution-by-gael-martinet",
        "pk": 4405,
        "published": true,
        "publish_date": "2026-02-20T11:33:16+01:00"
    },
    {
        "title": "Lime – First Fully Structured Rhythmic Composition with a Real-Time Plant Co-Performer",
        "description": "Lime is the first known rhythmic and structured musical composition in which a lemon plant performs in real time as a co-author and co-performer of a musical piece. The plant plays a piano through the Bamboo device (by Music of the Plants) with no signal alteration or modulation, keeping a tempo of 176 BPM ±50ms, alongside human performers.\n\nThe piece is structured in three main sections:\n\n    Intro: Plant plays piano + classical guitar arpeggio by Mazzarani\n\n    Development: Addition of drums, bass, electric guitar, organ — the plant keeps steady rhythmic phrasing on piano\n\n    Solo/Outro: Guitar solo and plant solo intertwine, leading to a fade-out\n\nAll instruments (guitars, drums, organ) are played or composed by Mazzarani using Fruity Loops 11 and Adobe Audition for multitrack assembly, with no signal alteration applied to the plant’s output.",
        "content": "<p>Title: Lime &ndash; A Structured Musical Composition with a Plant as Real-Time Rhythmic Co-Performer</p>\n<p>Author: Cesare Mazzarani</p>\n<p>Abstract:<br>This paper presents Lime (2025), the first fully structured musical composition featuring a plant (lemon tree) as a real-time rhythmic co-performer. Using the Bamboo device (by Music of the Plants), the plant's electrical activity is converted into MIDI data that triggers a digital piano in sync with a constant metronome of 176 BPM, with a negligible timing deviation (&plusmn;50 ms). The composition follows a clear formal structure: introduction, development, and resolution, with human and vegetal performers maintaining rhythmic cohesion.</p>\n<p>The work challenges the boundaries between performer and instrument, expanding the concept of authorship to non-human biological entities. No modulation or signal alteration is applied to the plant&rsquo;s output. All human instrument tracks (classical/electric guitar, bass, drums, organ) were performed or composed by the author.</p>\n<p>The piece opens new questions about bio-sonification, rhythmic entrainment between species, and plant-computer-human performance systems.<a href=\"https://music.amazon.it/albums/B0DZPK4WW4\" title=\"Lime\">https://music.amazon.it/albums/B0DZPK4WW4</a></p>",
        "topics": [],
        "user": {
            "pk": 124260,
            "forum_user": {
                "id": 124096,
                "user": 124260,
                "first_name": "cesare",
                "last_name": "mazzarani",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f3ffc3af66657934f664447e26a3fbdd?s=120&d=retro",
                "biography": "Cesare Mazzarani (born 26 September 1974) is an Italian multi-instrumentalist author, composer and musician. In December 2022, he decided to move to a charming 18th-century farmhouse located in the countryside of Tuscia Viterbese, with the aim of moving away from the frenzy of urban life and embracing a lifestyle in harmony with nature. In 2023, Mazzarani began a pioneering experimentation with the music of plants, using an innovative electronic tool created and marketed by the brand, Music of the Plants that measures the differential of electrical potential between the soil and the leaves, translating the biochemical life of the plants into musical notes. Initially, his goal is to decipher a musical personality of plants. After recording and analyzing the data of 20 different plant species, Mazzarani does not find musically relevant elements and decides to focus on the electrical activity of plants, measuring the electrical potential between the soil and the leaves. These electrical peaks are interpreted as pressure variations, fundamental for the process of absorption of water and the minerals necessary for the life of the plant. From the measurements made, a surprising understan",
                "date_modified": "2025-07-16T21:07:33.469215+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "cesaremazzarani",
            "first_name": "cesare",
            "last_name": "mazzarani",
            "bookmarks": []
        },
        "slug": "lime-first-fully-structured-rhythmic-composition-with-a-real-time-plant-co-performer",
        "pk": 3556,
        "published": true,
        "publish_date": "2025-07-16T20:43:38.378580+02:00"
    },
    {
        "title": "'Qu'est-ce qu'un nom ?' L'IRCAM, MPEG-7 et la normalisation de la description audio - Landon Morrison",
        "description": "Le projet CUIDADO, dirigé par des chercheurs de l'IRCAM de 2000 à 2003, visait à établir une norme industrielle pour la description du contenu audio, en contribuant au développement de Studio Online (SOL), du Descriptor de l'IRCAM et de la norme MPEG-7, avec des recherches explorant ses diverses applications et son impact sur les contextes académiques et populaires.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: justify;\">-</p>\r\n<p style=\"text-align: justify;\">Pr&eacute;sente par : Landon Morrison<br /><a href=\"https://forum.ircam.fr/profile/landomo/\">Biographie</a></p>\r\n<p style=\"text-align: justify;\">-</p>\r\n<p>De 2000 &agrave; 2003, une &eacute;quipe de chercheurs de l'IRCAM a dirig&eacute; le projet CUIDADO, en collaboration avec une assembl&eacute;e internationale d'acteurs universitaires, d'entreprises et de gouvernements, afin d'&eacute;tablir une norme industrielle pour la description du contenu audio dans les applications num&eacute;riques (Peeters et al. 2000). Sur le plan interne, ce travail trouve son origine dans la recherche sur l'association de sons avec des descripteurs s&eacute;mantiques, qui a permis de d&eacute;velopper une grande base de donn&eacute;es d'&eacute;chantillons instrumentaux appel&eacute;e Studio Online (SOL) et un syst&egrave;me de recherche d'informations musicales appel&eacute; Descriptor de l'IRCAM. En externe, le travail visait &agrave; produire une taxonomie de descripteurs pour la nouvelle norme MPEG-7, qui comprenait des dizaines de descripteurs de bas niveau regroup&eacute;s en plusieurs cat&eacute;gories (par exemple, temporel, &eacute;nerg&eacute;tique, spectral, harmonique et perceptuel), et qui les reliait &agrave; des repr&eacute;sentations s&eacute;mantiques de haut niveau du son (instrument, &eacute;v&eacute;nement, humeur, tonalit&eacute;, etc.) 
&agrave; l'aide d'algorithmes d'indexation de la musique.</p>\r\n<p style=\"text-align: justify;\">-&nbsp;</p>\r\n<p>Dans cet article, je retrace les int&eacute;r&ecirc;ts convergents de l'IRCAM et de ses collaborateurs qui cherchent &agrave; construire un syst&egrave;me g&eacute;n&eacute;ral permettant aux utilisateurs de \"manipuler des contenus audio/musicaux par le biais d'une sp&eacute;cification de haut niveau, con&ccedil;ue pour correspondre aux structures cognitives humaines impliqu&eacute;es dans la perception auditive\" (Vinet et al. 2002, p. 197). En m&ecirc;me temps, j'observe des applications divergentes du syst&egrave;me dans des contextes acad&eacute;miques et populaires ; le premier inclut des utilisations compositionnelles dans le programme d'orchestration assist&eacute;e par ordinateur Orchids (maintenant Orchidea), tandis que le second inclut une gamme d'applications dites \"d'empreintes audio\", telles que l'identification automatique de chansons et la reconnaissance de la parole. En cartographiant les connexions entre les discours scientifiques et les pratiques sonores circulant au sein (et en dehors) du projet CUIDADO, ma recherche met en lumi&egrave;re un r&eacute;seau de facteurs intellectuels, culturels, mat&eacute;riels et &eacute;conomiques qui ont contribu&eacute; &agrave; l'&eacute;mergence d'une nouvelle norme de m&eacute;tadonn&eacute;es audio et ont ciment&eacute; son statut en tant qu'&eacute;l&eacute;ment d'une infrastructure d'information mondiale. 
L'article conclut en examinant les techniques perceptives qui sous-tendent cette conjoncture de tendances, en attirant l'attention sur leur sp&eacute;cificit&eacute; historique et culturelle comme moyen de remettre en question les hypoth&egrave;ses sur la musique et la poursuite d'une connaissance sonore universelle.</p>\r\n<p style=\"text-align: justify;\">-<br /><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\"><br /><strong>Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</strong></a><strong>&nbsp;</strong></p>\r\n<div>\r\n<div dir=\"ltr\">\r\n<div>\r\n<div>\r\n<div><span size=\"2\"></span></div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 2720,
            "forum_user": {
                "id": 2718,
                "user": 2720,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Landon-Morrison_photo.jpeg",
                "avatar_url": "/media/cache/8f/e5/8fe5df065eaadb4a712c618e92252c5c.jpg",
                "biography": "Landon Morrison is a music theorist who studies the role of technological mediation in 20th- and 21st-century sonic practices, focusing on timbre, microtonality, popular music, and the history of psychoacoustics. His research aims to develop a cross-disciplinary perspective on new musical media by bringing theory into dialogue with surrounding discourses from science, technology, sound and cultural studies, and by combining analytical approaches with historiography, archival methods, and ethnographic fieldwork. Recent publications include articles in Archival Notes (2022), Kalfou (2022), Music Theory Online (2021), Circuit (2018, 2019), and Nuove Musiche (2018), chapters in the Oxford Handbook of Time in Music (2021) and the Oxford Handbook of Spectral Music (2023; co-authored with Amy Bauer), and an hour-long episode on SMT-Pod: The Society for Music Theory Podcast (2022). Morrison is currently a Research Associate at Imperial College London, and starting in Fall 2024, he will be an Assistant Professor of Music Theory at the Eastman School of Music at University of Rochester.",
                "date_modified": "2024-11-22T21:02:01.902879+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 765,
                        "forum_user": 2718,
                        "date_start": "2017-06-07",
                        "date_end": "2025-03-11",
                        "type": 0,
                        "keys": [
                            {
                                "id": 339,
                                "membership": 765
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "landomo",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "quest-ce-quun-nom-lircam-mpeg-7-et-la-normalisation-de-la-description-audio-landon-morrison",
        "pk": 2730,
        "published": true,
        "publish_date": "2024-02-14T16:54:25+01:00"
    },
    {
        "title": "Celestial Armillary and Ubiquitous Wave - Cainy Yiru Yan",
        "description": "Presentation during the Ircam Forum Workshop 2023 In Paris",
        "content": "<p><strong></strong></p>\r\n<p><strong><em>Celestial Armillary and Ubiquitous Wave is a multimedia experience combining Spatial Audio and Virtual Reality experience that explores the cognition of sound and cosmology.</em></strong></p>\r\n<p><strong><em>Celestial Armillary and Ubiquitous Wave </em></strong><span>is a multimedia two-perspective (on-site/ virtual) experience of the same theme that explores the cognition of sound and the cosmos in a multisensory context.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span>This project is inspired by modern and ancient Chinese observational cosmology,&nbsp; we hope to translate the model through sound and visual language to create a new version&mdash; a passage that can link the past and now. In the 4th century B.C., Chinese ancients began to use the armillary sphere to measure and interpret celestial objects. It was used to construct perceptions of the external world. In this age of modern technology, astronomical data measurement and sonification are also iterating to explore the human-universe relationship. A new awareness of the universe is provoked by utilizing Higher-Order Ambisonics(HOA) sound experience and Virtual Reality experience. These two experiences perform in parallel and create a mirror heterotopia.</span></p>\r\n<p>&nbsp;</p>\r\n<p><span><img alt=\"\" src=\"/media/uploads/user/63bd20f53360b3fd8b3585bd82e0f385.png\" /><img alt=\"\" src=\"/media/uploads/user/4e34f3646a5dec0e54c1583520410748.jpg\" /></span></p>\r\n<p>&nbsp;</p>\r\n<p><span>The first spatial sound experience transports the audience to the center of a giant armillary in space. At the same time, the moving image creates a &ldquo;remeasurement&rdquo; of the Asian Astronomical map and responds to the experimental music. The rotation of the giant armillary sphere accompanies with various Chinese instruments, such as the Zither, Flute, Xun, Drums, etc. 
Starting from the Sun, it will take the audience on a slow astronomical sound wave journey through the nine planets.</span></p>\r\n<p><span>In our second, virtual reality experience, the ambisonic sound and interactive experience immerse people in a world of space measurement. With 6DoF tracking, each step the audience takes changes the armillary sphere and its distance from the planets. The audience is encouraged to use their embodied presence in the virtual space to measure the space-time transformation of the universe. Through the perception of spatial changes in the armillary sphere and the planets, this experience amplifies the sense of embodiment.</span></p>\r\n<p><span>Playing the role of a key, the multi-sensory experience will open the threshold to a cosmic archaeological experience of the universe for the audience.</span></p>\r\n<p><strong>This installation will be presented during</strong></p>\r\n<p><strong>Forum Ircam Workshop 29-31 March 2023</strong></p>\r\n<p class=\"wys-highlighted-paragraph\"><a href=\"https://forum.ircam.fr/collections/detail/ateliers-du-forum-ircam-edition-speciale-spatialisation-arvr/\">https://forum.ircam.fr/collections/detail/ateliers-du-forum-ircam-edition-speciale-spatialisation-arvr/</a></p>",
        "topics": [
            {
                "id": 1124,
                "name": "cosmology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1125,
                "name": "multimedia",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 620,
                "name": "Spatialaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 301,
                "name": "Virtual reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 702,
                "name": "Waves",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27090,
            "forum_user": {
                "id": 27063,
                "user": 27090,
                "first_name": "Cainy",
                "last_name": "Yiru Yan",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_0761_%E5%89%AF%E6%9C%AC.JPG",
                "avatar_url": "/media/cache/63/8f/638f3b80b67e2eeb3a0e7ab0f789aaa6.jpg",
                "biography": "Cainy Yiru Yan is a London-based interdisciplinary artist and immersive experience designer. Her practice spans extended reality (XR), audiovisual installations, sculptural practices, spatial sound, photography, film, documentary, digital art, live performances, and art prints. She explores overlooked narratives through post-existentialist thought, holistic systems, and Daoist philosophy, creating environments that dissolve the boundaries between materiality, spirituality, temporality, and human experience. Grounded in these philosophical foundations, her work investigates the fluid and interdependent relationships between space, material, memory, and human perception. Rather than imposing narratives, she invites audiences to encounter environments where decay and renewal, stillness and transformation, coexist. Through immersive technologies, spatial atmospheres, and multi-sensory experiences, Cainy crafts poetic spaces that invite audiences to engage with the invisible layers of memory, nature, and transformation. Her work has been exhibited internationally at venues such as IRCAM at the Centre Pompidou (FR), Kühlhaus Berlin (DE), the Royal Birmingham Society of Artists (UK), Flor",
                "date_modified": "2025-05-03T18:20:29.531747+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "yanyiru",
            "first_name": "Cainy",
            "last_name": "Yiru Yan",
            "bookmarks": []
        },
        "slug": "celestial-armillary-and-ubiquitous-wave-3",
        "pk": 2042,
        "published": true,
        "publish_date": "2023-02-06T19:23:49+01:00"
    },
    {
        "title": "DAFNE+ workshop: Reality Check for Das Wohlpräparierte Klavier by Philippe Manoury & Miller Puckette",
        "description": "This workshop animated by Phllippe Manoury and Miller Puckette presents how Reality Check is being used to guarantee the long-term viabliity of the new realization, in Pure Data, of Manoury's 2022 piece, Das Wohlprepärierte Klavier.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><span>Composer Philippe Manoury and researcher Miller Puckette will jointly present a new realization in Pure Data of Manoury's 2022 piece, Das Wohlprep&auml;rierte Klavier, which premiered in the Boulezsaal in Berlin, played by Daniel Barenbo&iuml;m.&nbsp; The new realization relies on the new continuous integration system, Reality Check, to ensure its continued playability and correctness.&nbsp; The workshop will place particular emphasis on how reality Check is being used to guarantee the long-term viabliity of the new realization.</span><br /><br /></p>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: 1012px; top: 71.7726px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>",
        "topics": [
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 328,
                "name": "Pd",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2715,
                "name": "reality check",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "dafne-workshop-reality-check-for-das-wohlpraparierte-klavier-by-philippe-manoury-miller-puckette",
        "pk": 3330,
        "published": true,
        "publish_date": "2025-03-06T14:28:48+01:00"
    },
    {
        "title": "Media Specific Performance: the screen mediated productions during the Pandemic - Dudu TSUDA",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris by Dudu Tsuda.",
        "content": "<p>In this presentation, I am aiming to discuss the development of new possibilities of aesthetic explorations in sound art, experimental music and audiovisual performances, that emerged during the pandemic. I am interested in real-time audiovisual researches that were specific created in and for video call software applications and virtual environments such as Zoom and Mozilla Hubs. In that sense, new forms of art works that had to englobe structural and sensitive limitations of the interfaces &ndash; that were not designed for this end -, and that could only emerge in that extreme social situation. Therefore, an essay about how this technical environment transformation influenced the development of a new cultural context for performance practice, that includes not only the new forms of image exploration for the construction of the dramaturgy, but also the acceptance of this new kind of real-time art experience by the audience. I will problematize how this accelerated technical development of software applications was &ndash; and is still linked - with the growing precariousness of the social-political-economic conditions of Global South realities. Finally, I will present the performance &ldquo;Plurinational Hymn :: Life is an Utopia&rdquo; created in and for Zoom application. The performance problematizes the recent episodes of legal precarization of the demarcations of indigenous lands in Brazil realized by the alt-right fascist government Bolsonaro. In that sense, it questions the growing devaluation of life in face of the financial market, manifested in the recent environmental catastrophes and the systematic dismantling of social and human rights.</p>\r\n<p></p>\r\n<p>Dudu Tsuda</p>",
        "topics": [
            {
                "id": 1147,
                "name": "experimental art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1146,
                "name": "experimental music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1145,
                "name": "media specific",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 24481,
            "forum_user": {
                "id": 24454,
                "user": 24481,
                "first_name": "Dudu",
                "last_name": "Tsuda",
                "avatar": "https://forum.ircam.fr/media/avatars/FOTO.jpg",
                "avatar_url": "/media/cache/c0/66/c0668294b105d06697933654332f9b21.jpg",
                "biography": "Dudu Tsuda is multimedia artist, sound artist, musician, composer, performer, music producer and professor. Founder and director of the experimental music and sound art label ALEA experimental. Doctorate in Arts UNESP-SP. Master degree in Technologies of Intelligence and Digital Design PUC-SP. Bachelard in Multimedia and Communication PUC-SP. He realized residency programs, expositions, performances e concerts in different countries like France, Japan, Colombia, Bolivia, Spain, Germany and Brazil in institutions like the 7th. Biennial of Mercosul (Porto Alegre / Brazil), IX Biennial Siart of La Paz 2016 (La Paz / Bolívia), Centre Georges Pompidou ( Paris / France ), Cité Internationale des Arts de Paris (Paris / France), L’institut Français (Tokyo / Japan), TOKAS (Tokyo / Japan), Modern Art Museum of São Paulo (São Paulo / Brazil), Internationale Résidence aux Recollets (Paris / France), ECCO :: Museo de Arte Contemporaneo (Cadiz / Spain), Die Fäberei Showcase Dyeing / Schaufenster der Fäberei (Munich / Germany), Einstein Kultur Musik Theatre (Munich / Germany), Festival de la Imagen de Manizales (Manizales / Colombia) and Paço das Artes (São Paulo / Brazil).",
                "date_modified": "2023-02-09T16:17:39+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "dudu-tsuda-gmail-com",
            "first_name": "Dudu",
            "last_name": "Tsuda",
            "bookmarks": []
        },
        "slug": "media-specific-performance-the-screen-mediated-productions-during-the-pandemic",
        "pk": 2053,
        "published": true,
        "publish_date": "2023-02-09T22:15:55+01:00"
    },
    {
        "title": "Nouvelles de l'équipe S3AM (T. Hélie, R. Piéchaud, C. Picasso)",
        "description": "Dans cette présentation, nous présenterons des nouvelles sur :\r\n1) Modalys : un exemple de conception sonore utilisant les dernières fonctionnalités de Lua et des éléments finis dans Max en temps réel ;\r\n2) L'Escargot : la V3 en cours de développement, et le matériel de l'Escargot.\r\n3) Le projet ATRIM (en collaboration avec Buffet-Crampon) basé sur la technologie Snail appliquée à \"High Precision Real Time Pitch and Timbre Analyser\".",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sentation par:&nbsp;T. H&eacute;lie, R. Pi&eacute;chaud, C. Picasso<br /><a href=\"https://forum.ircam.fr/profile/helie/\">Biography Thomas H&eacute;lie</a></p>\r\n<p>Dans cette pr&eacute;sentation, nous pr&eacute;senterons des nouvelles sur :<br />1) <strong>Modalys</strong> : un exemple de conception sonore utilisant les derni&egrave;res fonctionnalit&eacute;s de Lua et des &eacute;l&eacute;ments finis dans Max en temps r&eacute;el ;<br />2) <strong>L'Escargot</strong> : la V3 en cours de d&eacute;veloppement, et le mat&eacute;riel de l'Escargot.<br />3) <strong>Le projet ATRIM</strong> (en collaboration avec Buffet-Crampon) bas&eacute; sur la technologie Snail appliqu&eacute;e &agrave; \"High Precision Real Time Pitch and Timbre Analyser\".</p>\r\n<p></p>\r\n<p><strong>&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement </a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18359,
            "forum_user": {
                "id": 18352,
                "user": 18359,
                "first_name": "Thomas",
                "last_name": "Helie",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f1890bd9d8d8ef5dc06f3accb2692adf?s=120&d=retro",
                "biography": "T. Hélie is Director of Research at CNRS. He is the head of the S3AM team at STMS laboratory  hosted at IRCAM and coordinator of the ATIAM MSc (Sorbonne Université). His field of research is on nonlinear dynamical systems, control theory, signal processing, acoustics, physical modeling of audio/musical instruments and voice. He has co-authored more than 120 publications in journals or proceedings, filled 2 patents, has been involved in several collaborative projects and currently coordinates 2 of them. He has supervised more than 10 PhD students and 30 MSc students. He has been a board member of the SMAER doctoral school since 2018, and involved in councils of the French Acoustic Society (Mission leader for the \"Olympiades de Physique France\", Musical Acoustics Group: elected member since 2011; Speech Acoustics Group: elected member since 2016). He was elected representative of researchers at STMS (2006-19) and is the STMS contact of the INS2I innovation unit (since 2021). One of his patented inventions will become one of the 5 manip'Icons of the Palais de la Découverte 2025, Paris.",
                "date_modified": "2026-02-16T13:32:55.998309+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 244,
                        "forum_user": 18352,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "helie",
            "first_name": "Thomas",
            "last_name": "Helie",
            "bookmarks": []
        },
        "slug": "news-from-the-s3am-team-t-helie-r-piechaud-c-picasso",
        "pk": 2839,
        "published": true,
        "publish_date": "2024-03-18T17:43:12+01:00"
    },
    {
        "title": "Pitch and dynamic transformation on human voice with deep neural networks",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<h3>The Crazy&nbsp;IRcam neural auto-encoderfor voiCE</h3>\r\n<p>The Crazy IRcam neural auto-encoder for voiCE (CIRCE)&nbsp;is the first transformation tool&nbsp;of the next generation of neural audio effects&nbsp;developed at the Analysis/Synthesis team at IRCAM.&nbsp;It provides a graphical interface&nbsp;for voice transformations based on a neural auto-encoder.&nbsp;The app allows changing different parameters&nbsp;in recordings of speech and singing voice.&nbsp;Currently pitch and vocal effort are supported in CIRCE,&nbsp;as well as additional experimental features.</p>\r\n<p>The transformations work with the new neural vocoding model&nbsp;which allows applying effects on the mel-spectrogram.&nbsp;A neural auto-encoder is trained&nbsp;to disentangle the pitch and the vocal effort&nbsp;from the mel-spectrogram of speech and singing voice.&nbsp;This way we obtain a representation of human voice&nbsp;that is independent of pitch and vocal effort&nbsp;and we can resynthesise the voice signal&nbsp;with almost arbitrary changes to the original material.&nbsp;As a result, the model automatically adapts&nbsp;the voice characteristics to the given parameters.</p>\r\n<p>In this presentation we will give a brief introduction&nbsp;to the mechanism of neural vocoding&nbsp;that is the technology behind this application.&nbsp;In the second part we provide install instructions,&nbsp;a demonstration of the features of the application&nbsp;as well as a few examples of transformed voice.</p>\r\n<p>The application was developed&nbsp;as part of the PhD thesis of Frederik Bous&nbsp;which was supervised by Axel Roebel.&nbsp;This research was funded by the ANR project ARS (ANR-19-CE38-0001-01).</p>",
        "topics": [],
        "user": {
            "pk": 25844,
            "forum_user": {
                "id": 25817,
                "user": 25844,
                "first_name": "Frederik",
                "last_name": "Bous",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e7d4a430d8b19d965d6cccd1737e008f?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-09-11T11:05:26.724045+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 449,
                        "forum_user": 25817,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "bous",
            "first_name": "Frederik",
            "last_name": "Bous",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 25844,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "pitch-and-dynamic-transformation-on-human-voice-with-deep-neural-networks",
        "pk": 1350,
        "published": true,
        "publish_date": "2022-09-15T10:15:05+02:00"
    },
    {
        "title": "Embodme lance son contrôleur Erae Touch en financement participatif",
        "description": "Embodme, une startup fondée par des anciens élèves du Master Atiam de l'Ircam lance son premier instrument de musique sur la plateforme de financement participative Kickstarter:\r\nErae Touch - un contrôleur MIDI polyphonique et polyvalent !",
        "content": "<p><strong>Embodme</strong>, une startup incub&eacute;e dans le programme d'innovation du 104 et Agoranov lance son premier instrument de musique sur <a href=\"https://www.kickstarter.com/projects/erae-touch/erae-touch-the-expressive-music-controller-0?ref=5gsnc4\">Kickstarter</a>&nbsp;ce mercredi 14 octobre 2020.</p>\r\n<p>La soci&eacute;t&eacute; a &eacute;t&eacute; cr&eacute;&eacute;e par Edgar Hemery, ancien &eacute;l&egrave;ve du Master ATIAM de l'Ircam suite &agrave; sa th&egrave;se au sein de l'&eacute;cole des Mines ParisTech sur <em>\"l'&eacute;tude du mouvement et de l'interaction gestuelle et musicale\". </em>A la fin de cette th&egrave;se en 2018 il a obtenu un financement post-doc innovation qui lui a permis de d&eacute;velopper ses premiers prototypes, pr&eacute;sent&eacute;s notamment lors des derni&egrave;res journ&eacute;es d'<a href=\"https://medias.ircam.fr/embed/media/x6743c4\">Ateliers du Forum en Mars 2020.</a></p>\r\n<p>L'&eacute;quipe constitu&eacute;e d'anciens de l'<em>Ircam</em>, de <em>Devialet</em> et de <em>Squarp instruments</em> est d&eacute;sormais pr&ecirc;te &agrave; lancer les &eacute;tapes d'industrialisation et de commercialisation !</p>\r\n<p></p>\r\n<p style=\"text-align: center;\">&nbsp;</p>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"314\" src=\"//www.youtube.com/embed/uUowJfsUbYs\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p><em>Erae Touch&nbsp;</em>est un pad MIDI interactif et multi-touch utilisant une technologie de matrice de capteurs FSR permettant d'am&eacute;liorer la sensibilit&eacute; et le temps de r&eacute;ponse pour un jeu en temps-r&eacute;el. Ces innovations ont pour but d'apporter de nouvelles formes d'interactions gestuelles dans les performances musicales et &eacute;lectroniques.</p>\r\n<p>Une surface large faite de silicone augmente le potentiel expressif fa&ccedil;e aux contr&ocirc;leurs classiques. 
Ce nouvel outil permet en effet de jouer avec des baguettes autant qu'avec ses mains et d'utiliser des techniques de jeux jusqu'ici possibles seulement avec des instruments acoustiques (glissando, vibrato, pizzicato).</p>\r\n<p>&nbsp;</p>\r\n<p><img style=\"display: block; margin-left: auto; margin-right: auto;\" src=\"https://ksr-ugc.imgix.net/assets/030/992/282/24049c94cb2adf5912ce4362ae238dca_original.gif?ixlib=rb-2.1.0&amp;w=680&amp;fit=max&amp;v=1602623250&amp;auto=format&amp;gif-q=50&amp;q=92&amp;s=6ff70faea48b9eb4f6b0a4b346382272\" alt=\"MPE_Gestures\" width=\"680\" height=\"227\" /></p>\r\n<p>&nbsp;</p>\r\n<p>Le logiciel Erae Lab &eacute;galement d&eacute;velopp&eacute; par l'&eacute;quipe permettra aux utilisateurs de reconfigurer l'&eacute;cran de LED int&eacute;gr&eacute; au PAD et de cr&eacute;er une infinit&eacute; de <em>layouts </em>afin de personnaliser son <em>set</em> musical et de cr&eacute;er une r&eacute;elle interaction avec le public avec des visuels dynamiques et des objets musicaux immersifs.</p>\r\n<p><img style=\"display: block; margin-left: auto; margin-right: auto;\" src=\"/media/uploads/user/aa6c89f86dec71aecde86fe43fd0d06f.jpg\" alt=\"MusicObjects\" width=\"540\" height=\"100\" /></p>\r\n<p>&nbsp;</p>\r\n<p>Des connections standards ainsi que la cr&eacute;ation de templates sont pr&eacute;vues pour contr&ocirc;ler les logiciels de cr&eacute;ation num&eacute;riques ou faire revivre ses synth&eacute;tiseurs gra&ccedil;e &agrave; l'int&eacute;gration de la nouvelle norme MIDI 2.0.</p>\r\n<p>&nbsp;</p>\r\n<p>--</p>\r\n<p>Pour plus d'information rendez-vous sur leur site web: <a href=\"https://www.embodme.com/\" target=\"_blank\" rel=\"noopener\">https://www.embodme.com/</a></p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 399,
                "name": "Interactions gestuelles",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 489,
                "name": "Instrument",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 206,
                "name": "Interactive real-time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 74,
                "name": "Midi",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 226,
                "name": "Multimedia tools",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 487,
                "name": "Multitouch",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 488,
                "name": "Music interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 13258,
            "forum_user": {
                "id": 13255,
                "user": 13258,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f54147d5de2b55c4481a10b2700964a2?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Sphax",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "embodme-lance-son-premier-instrument-erae-touch",
        "pk": 762,
        "published": true,
        "publish_date": "2020-10-14T02:32:50+02:00"
    },
    {
        "title": "\"Spatial Sound Transformation toolkit for Max\" by Anders Tveit (Norway)",
        "description": "The Spatial Sound Transformation toolkit for Max is a range of Max modules in the form of bpatchers build around IRCAM´s Spat5 library and has been developed throughout my own artistic practice but also through lectures and workshops I have given in artistic and creative use of ambisonic over the past years.",
        "content": "<p></p>\r\n<p><strong>Spatial Sound Transformation toolkit for Max</strong></p>\r\n<p>A presentation and demo of my Spatial Sound Transformation toolkit for Max, which is a range of Max modules in the form of bpatchers build around IRCAM&acute;s Spat5 library and has been developed throughout&nbsp; my own artistic practice but also through lectures and workshops I have given in artistic and creative use of ambisonic over the past years.</p>\r\n<p>Combined with my interest in mapping, sonification and sound transformation have led me to create and explore artistic techniques of using (and misusing) ambisonics and other formats where these techniques emphasize thinking outside the box and establishing a direct relationship between shaping the material and &ldquo;bending&rdquo; the sounding space.</p>\r\n<p>The toolkit offers a range of such methods, where spatiality arises naturally from the sound&rsquo;s transformation rather than from a choreographed placement of point sources. Additionally, the toolkit simplifies both patching and working with ambisonics by providing automated channel and connection handling.</p>",
        "topics": [
            {
                "id": 1830,
                "name": "ambisonic ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3239,
                "name": "electroacoustic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3194,
                "name": "Max 9",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 274,
                "name": "Soundart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 45,
                "name": "Spat5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 620,
                "name": "Spatialaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 42,
            "forum_user": {
                "id": 42,
                "user": 42,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/AndersTveit-credits-ThorEgilLeirtr%C3%B8-small.jpeg",
                "avatar_url": "/media/cache/a6/0f/a60f1c63e1a84acbb6d9be2d5208048e.jpg",
                "biography": "Anders Tveit (1977) is a composer and musician who works with electro‑acoustic composition, improvisation, and sound installations. The use of self‑developed software for real‑time processing and spatial sound plays a central role in his musical expression.\n\n“In my work, I am constantly interested in working with new approaches to the use of new technology, both in the composition process and in the performance. Something that for me, creates holistic thinking around the work and the artistic result. This often means that I like to work in such a way that the differences between developer, performer and composer are blurred, which I find interesting, challenging and exciting. ”\n\nTveit has composed multichannel electro-acoustic music works and sound-installations featured and performed at Ultima Contemporary MusicFestival, GRM-Paris, Only Connect Festival, ZKM-Karlsruhe, NIME, CCRMA-Stanford, KlangFest-Liechtenstein, Lydgalleriet, Sound/Image-London, Jauna Muzika-Lithuania, decibel Festival-Riga, Henie Onstad Art Center, Echochroma-Leeds, Aparte Festival for Experimental Arts and more.\n\nTveit an associate professor at the Norwegian Academy of Music.",
                "date_modified": "2026-03-03T09:55:16.488633+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "AndersTveit",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "spatial-sound-transformation-toolkit-for-max-by-anders-tveit-norway",
        "pk": 3761,
        "published": true,
        "publish_date": "2025-09-24T10:28:36+02:00"
    },
    {
        "title": "Whale Fall par - Armelle, Janmejay, Riya, Selin, Xuanbei",
        "description": "Résumé du projet Whale Fall, une expérience audiovisuelle immersive réalisée par des étudiants du Royal College of Art.\r\nÉquipe du projet : Selin Öztürk, Janmejay Singh, Armelle Mihailescu, Riya Mahajan, Xuanbei Penf",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par :&nbsp;Selin &Ouml;zt&uuml;rk, Janmejay Singh, Armelle Mihailescu, Riya Mahajan, Xuanbei Peng<br /><a href=\"https://forum.ircam.fr/profile/selinozturk0205/\">Biographie Selin &Ouml;zt&uuml;rk</a><br /><a href=\"https://forum.ircam.fr/profile/janmejays/\">Biographie Janmejay Singh<br /></a><a href=\"https://forum.ircam.fr/profile/armellemihailescu/\">Biographie Armelle Mihailescu<br /></a><a href=\"https://forum.ircam.fr/profile/riyamahajan/\">Biographie Riya Mahajan</a><br /><a href=\"https://forum.ircam.fr/profile/quarkbei/\">Biographie Xuanbei Peng&nbsp;</a></p>\r\n<p></p>\r\n<p>Whale Fall\" est une installation immersive qui invite les spectateurs &agrave; plonger dans les profondeurs de l'oc&eacute;an. Elle propose un voyage de r&eacute;flexion dans le spectacle extraordinaire de la chute d'une baleine, un ph&eacute;nom&egrave;ne naturel qui se produit lorsqu'une baleine meurt et descend au fond de l'oc&eacute;an, devenant un catalyseur &eacute;ph&eacute;m&egrave;re mais essentiel pour les &eacute;cosyst&egrave;mes sous-marins.</p>\r\n<p>Le r&eacute;cit audio se d&eacute;roule avec le public dans un lieu en bord de mer, puis se d&eacute;place dans l'oc&eacute;an et, apr&egrave;s un impact, le public voyage avec la baleine morte dans sa descente vers le fond de l'oc&eacute;an. La surface de l'oc&eacute;an, remplie de d&eacute;chets plastiques et de filets fant&ocirc;mes, est une remarque &eacute;mouvante sur la pollution marine actuelle, qui incite &agrave; r&eacute;fl&eacute;chir &agrave; ses effets n&eacute;fastes sur la vie marine. 
Le spectateur est transport&eacute; sans transition pour assister &agrave; la m&eacute;tamorphose progressive du fond de l'oc&eacute;an, alors que la flore et la faune aquatiques s'&eacute;panouissent autour de la carcasse de la baleine, refl&eacute;tant ainsi la formation d'un &eacute;cosyst&egrave;me qui s'&eacute;tend g&eacute;n&eacute;ralement sur plusieurs d&eacute;cennies.</p>\r\n<p>Les spectateurs participent &agrave; cette exp&eacute;rience gr&acirc;ce &agrave; une conception sonore spatiale soigneusement &eacute;labor&eacute;e, qui intensifie l'atmosph&egrave;re immersive et compl&egrave;te la narration visuelle. Avec le mariage d'un paysage sonore des profondeurs et de visuels &eacute;vocateurs, nous visons &agrave; transcender les limites de la narration traditionnelle. Cette exp&eacute;rience multisensorielle invite &agrave; la contemplation de la fragilit&eacute; et de la r&eacute;silience des &eacute;cosyst&egrave;mes d'eaux profondes.</p>\r\n<p>Gr&acirc;ce &agrave; la technologie immersive et au design sonore, l'exp&eacute;rience sert de moyen pour favoriser une connexion plus profonde avec l'oc&eacute;an, en encourageant la prise de conscience et la gestion des habitats marins de notre plan&egrave;te.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1858,
                "name": "rca",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1859,
                "name": "royalcollegeofart",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1108,
                "name": "VR",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1860,
                "name": "whalefall",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 53801,
            "forum_user": {
                "id": 53739,
                "user": 53801,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f4fd21357e0250126256d02c0ec75165?s=120&d=retro",
                "biography": "Selin Öztürk is an Interior Architecture and Environmental Designer graduated from Bilkent University, Turkey. Currently she is continuing her postgraduate education in Royal College of Art in the major of Digital Storytelling. She is a artist who interested in designing sustainable and accessible interiors. She is also interested in photography and cinematography. During her bachelor, addition to interior architecture, she was getting courses from communication and graphic design. These courses shaped her decision to studying in storytelling RCA.",
                "date_modified": "2024-03-04T00:33:46.931491+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "selinozturk0205",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "whale-fall-by-armelle-janmejay-riya-selin-xuanbei",
        "pk": 2788,
        "published": true,
        "publish_date": "2024-03-03T12:33:32+01:00"
    },
    {
        "title": "Isomorphisms Between Time and Tone",
        "description": "The goal of this article is to provide a comprehensive introduction to the possibility of relating musical temporal and tonal structures. I explore the differences between the two musical domains and discuss potential points for connection. This subject demands the interaction between many fields of research in music, drawing on music theory, ethnomusicology, auditory perception and cognition, and acoustics.",
        "content": "<h2 class=\"p1\">Introduction</h2>\r\n<p>The musical domains of tone and time are commonly distinguished when discussing sound art. Different cultures and their musical traditions can be discerned by the various perspectives and systems they employ when engaging with these domains. There is evidence to support that various musics from around the world exhibit structural similarities across their manipulations of tone and time. These similarities can be mapped in the form of isomorphic or identical models. While perceptually, these domains appear strikingly different, the existence of such isomorphisms hints at the possibility that they share fundamental commonalities in the ways they are processed cognitively.</p>\r\n<p>Several articles explore these structural similarities from different perspectives (Pressings 1983, London 2002, Rahn 1975, Stevens 2004, Bar-Yosef 2007, etc.), alluding to a myriad of possibilities for connection. In the West, electronic music brought about considerations of time and tone&mdash; most notably, Stockhausen (in *Structure and Experimential Time* and *The Concept of Unity in Electronic Music*) and Grisey (in *Tempus Ex Machina*). The book, *Microsound*, by composer Curtis Roads tackles this subject and provides helpful references to other composers in this vein.</p>\r\n<p class=\"p1\">The goal of this paper is to identify and discuss models which can effectively relate various musical phenomena across the domains of time and tone. I will explore the theoretical understanding for such isomorphisms, both reviewing established and introducing new models. Beginning with a brief review on select physical properties of sound, I will explore the perceptual distinction which separates these two domains. This understanding will inform the investigation and evaluation of subsequent theoretical equivalences. I will compare the pertinence of models by comparing their ability relate unique or multiple musical phenomena. 
Models will be evaluated from perspectives oriented around perception/cognition, relevance to existing music (general musicology), and the functional limitations of the models themselves.</p>\r\n<h2 class=\"p1\">The Fundamental Problem</h2>\r\n<p class=\"p1\">When considering the perceptually disparate concepts of time and tone, it is helpful to begin at the instance in which both are physically identical: pulsation and frequency. A frequency is the rate at which air pressure cycles in and out of equilibrium. A pulsation is an onset which recurs at a single constant duration. For purposes of clarity, I will briefly ignore the specific onset/offset amplitude envelopes and refer to each unit of pressure disturbance as a grain: a sufficiently brief sonic impulse. We can merge these definitions by claiming that both frequency and pulsation are periods of pressure disturbance (or grains); regarding pulsation as a slow frequency (&lt;20Hz) and pitch as a fast pulsation (&gt;20Hz). For clarity, I associate pulsation with rhythm/time and frequency with pitch/tone in this paper.</p>\r\n<p class=\"p1\">In order to begin, it is first necessary to understand how a single physical stream of events manifests into the two disparate perceptions of tone and time. We can do this through a simple example. Take a chain of identical grains pulsating steadily at a rate of five times per second (5Hz). This chain will be heard as five discrete points within each passing second. However, as the grains repeat faster than 20 per second (20Hz), i.e. as the inter-onset interval (IOI) shrinks below 1/20 of a second, the perception of an individual grain is no longer possible. Where each individual grain was once identifiable, instead the IOI, the amount of time between grains, is &ldquo;listened to&rdquo; and abstracted into a <em>tone</em>. The sensation of pitch occurs when grains are replaced by the perception of a temporally-abstract (&ldquo;atemporal&rdquo;) tone. 
In other words, discrete grains pulsating at 500Hz are not perceived as 500 events, but as one event&mdash;one single, static tone.</p>\r\n<p class=\"p1\">Through the example of a physical signal whose grain-rate (i.e. frequency) is manipulated, we can identify the perceptual difference that separates pulsation and tone: the abstraction of time. Pulsations that are perceived as &ldquo;faster&rdquo; or &ldquo;slower&rdquo; in the temporal domain are transformed into tones and perceived as &ldquo;higher&rdquo; or &ldquo;lower&rdquo; in the pitch domain (or other associations, depending on the culture [1]). It is for this reason that we will, from here on, consider all time-related events as belonging to the rhythmic domain. For clarity, the terms onset and offset are extended to include even the most unclear attacks and decays.</p>\r\n<p class=\"p1\">We arrive at the basic disconnect between the two musical domains: the perception of time in pulsation and the perception of tone in frequencies. At slow cycle durations (pulsation), time between onsets is consciously available and onset/offset pairs are distinguishable; at fast cycle durations (frequency), time is not perceptible and individual onset/offset pairs are not accessible. For the rest of the paper, I will discuss time and tone as two distinct musical domains.</p>\r\n<p class=\"p1\"><img src=\"/media/uploads/user/710dfc9bce00c10d1748170b85eb61b4.png\" alt=\"Figure 1\" width=\"528\" height=\"218\" /></p>\r\n<p class=\"p1\"><em>Figure 1</em></p>\r\n<p class=\"p1\">Given this discrepancy, it is important to question why we can continue to search for isomorphisms between these two non-isomorphic domains (Figure 1). While it is clear that there are vast perceptual differences between the two musical domains, it is interesting to observe that both domains are often approached through proportions (or &ldquo;intervals,&rdquo; above). 
It can be shown that similar cognitive structures serve to compare units within both the temporal and pitch domains.[2] Different musical traditions impose abstract, proportional &ldquo;maps&rdquo; onto these domains in the form of tonality, meters, scales, rhythmic patterns, etc. The prevalence of proportional systems across both domains is what encourages us to consider the relationship between these systems.</p>\r\n<p class=\"p1\">It is then pertinent to consider the units in either domain which facilitate these proportions, as well as how they can be related. I propose two possibilities for which units may be equated. The first explores durations, fundamental to both pitch and rhythm, as the units of these models. The second, later on in this paper, relates the audible members of each domain (marked in bold in Figure 1).</p>\r\n<h2 class=\"p1\">Basic Proportional Relationships</h2>\r\n<p class=\"p1\">The harmonic series is a common framework for evaluating integer-based proportional relationships between frequencies. A harmonic series is created by taking a reference frequency (the root) and multiplying it by incrementally-ascending integers. Instead of discussing each harmonic as a fixed frequency, we will use the harmonic series relatively in order to evaluate any root value. For pulsation and frequency, we take the &ldquo;unison&rdquo; and &ldquo;whole note&rdquo; as references and multiply them by progressively-ascending integers.</p>\r\n<p class=\"p1\">In Figure 2, the bottom row of the left chart represents the root, and the rows above represent integer-multiples of the root. Each row contains three columns giving the pitch, rhythmic, and integer intervals with respect to the root. 
In this first example, we will limit our consideration to a single row of the harmonic series at a time.</p>\r\n<p class=\"p1\"><img src=\"/media/uploads/user/701aa266b0ecdbf908b5fe4812d488ae.png\" alt=\"Figure 2\" width=\"300\" height=\"477\" /></p>\r\n<p class=\"p1\"><em>Figure 2</em></p>\r\n<p class=\"p1\">An interesting equivalence suggested by this model is between octaves and double time. Because pitch tends to operate at proportions of &le;0.5 and rhythm at proportions of &ge;0.5, it is precisely this 0.5, or 1:2, ratio in which the two domains share a strong quality. I will consider this shared quality in greater detail later in the paper.</p>\r\n<p class=\"p1\">Considering the connection between tone and time through single harmonic ratios does not prove to be very powerful for a number of reasons. The first disparity is that our &ldquo;range of hearing&rdquo; in the temporal domain is comparatively smaller than that of the pitch domain. The human range of beat perception occurs at 0.5-4Hz.[3] As a pulse grows to a length of 2 seconds, events dissociate and a pulse is undetectable. As a root is subdivided beyond 20 times per second, the pulse converts into pitch. This rhythmic &ldquo;range of hearing,&rdquo; from 2 seconds to 1/20 seconds, forms a ratio of 1:40. Pitch, by contrast, can go from 20Hz to 20,000Hz[4], a 1:1000 ratio, 25 times larger than rhythm&rsquo;s.</p>\r\n<p class=\"p2\"><img src=\"/media/uploads/user/ec1d73d3a4474f22daa75298b9da7cb9.png\" alt=\"Figure 3\" width=\"720\" height=\"124\" /></p>\r\n<p class=\"p1\"><em>Figure 3</em></p>\r\n<p class=\"p1\">The second disparity shown by this model arises when taking a musical perspective. In most musical cultures, scales partition an octave (1:2 ratio) into at least 4 parts. This means that, in the tonal domain, the root (or &ldquo;tonic&rdquo;) is able to form very complex ratios with simultaneous or subsequent pitches. 
Rhythm, however, 1) more commonly features much larger intervallic leaps, and 2) more often features ratios which can be reduced to 1:x (where x is a low integer) and subdivide by x recursively.</p>\r\n<p class=\"p1\">It is hypothesized that these rhythmic features are due to our cognitive tendency to relate all temporal events to a perceived beat (called rhythmic &ldquo;entrainment&rdquo;)[5] in combination with our preference for events to be grouped recursively in units of 1, 2, and 3 (dubbed &ldquo;subitization&rdquo;).[6] Essentially, a beat will be divided into a few small, usually evenly-spaced parts. Those parts are then similarly divided again. This process repeats until we see strong preferences for the ratios highlighted in Figure 2. In the figure, blue columns are products of 2-subdivisions (tuplets), yellow columns are products of 3-subdivisions (triplets), and green are products of 2- and 3-subdivisions. Pitch, however, does not exhibit these same tendencies.</p>\r\n<p class=\"p1\"><img src=\"/media/uploads/user/fb52066177d2568f189dbe1633282a2f.png\" alt=\"Figure 4\" width=\"250\" height=\"311\" /></p>\r\n<p class=\"p1\"><em>Figure 4</em></p>\r\n<h4><strong>x : (x+1)</strong></h4>\r\n<p class=\"p1\">Above, I considered one row of the harmonic series at a time. In this example, I look at two adjacent rows occurring simultaneously (cross-rhythms and dyads). Two adjacent rows form phasing ratios, meaning that the two pulse/frequency streams form a ratio of x:(x+1). While these ratios offer a very limited window onto the harmonic series, they present characteristics useful for drawing isomorphisms.</p>\r\n<p class=\"p1\">Considering phasing relationships within pulsation and frequency has several advantages. First, two cycles going in and out of phase alignment in a periodic fashion results in a perceivable, higher-level structure. This higher-level structure is perceivable ubiquitously across both temporal and pitch domains. 
In time, the higher level is felt as a longer temporal cycle. The leftmost depiction in Figure 5 demonstrates how, at slower pulsations, a higher level arises from the oscillation between maximal alignment and misalignment of the grains. In tone, the higher level is perceived as a periodic amplitude modulation, called &ldquo;beating.&rdquo;[7] The rightmost depiction demonstrates how the amplitude envelopes of each grain combine, forming a &ldquo;hairpin&rdquo; shape in the amplitude. This higher level can even be perceived as a separate pitch if the cycling frequency is sufficiently high.[8] Simply put, every time two phasing pulsations or frequencies are played, we identify the root (the &ldquo;1&rdquo;) such that the initial x:(x+1) becomes x:(x+1):1.</p>\r\n<p class=\"p1\"><img src=\"/media/uploads/user/e2d81e57a9901a5b667e2972d9ab1709.png\" alt=\"Figure 5a\" width=\"336\" height=\"137\" /></p>\r\n<p class=\"p1\"><em>Figure 5a</em></p>\r\n<p class=\"p1\"><img src=\"/media/uploads/user/dd2fc746fdffaf8a84ef5dc07b31c05d.png\" alt=\"Figure 5b\" width=\"394\" height=\"129\" /></p>\r\n<p class=\"p1\"><em>Figure 5b</em></p>\r\n<p class=\"p1\">The next advantage of this approach is the ability for pulsations in the temporal domain to employ ratios beyond 1:x. With the opportunity for simultaneity, we are able to identify common ratios between the pitch and rhythm domains such as 2:3 and 3:4. These polyrhythms are extremely common across a vast part of the musical geography.[9] The pitch equivalents of these ratios depict themselves through the just-4th and just-5th dyads, also extremely common intervals. Perceptually speaking, all phasing polyrhythms x:(x+1) in music generally have the same perceived &ldquo;beating&rdquo; quality, meaning that 6:7, for example, resembles 17:18 simply because they both phase.[10] Theoretically, there exist isomorphisms between any two phasing polyrhythms and dyads. 
From a performance perspective, however, the fact that musicians experience much greater difficulty when attempting higher-integer phasing ratios in polyrhythms than in dyads argues against this general comparison.[11]</p>\r\n<h4>x : (x+n)</h4>\r\n<p class=\"p1\">It is interesting to note that, performance-wise, the ratio 2:x, which manifests itself in many examples within the pitch domain, is very easy to produce rhythmically. In an attempt to explore more complex intervals, such as 2:x, within the temporal domain, I consider what happens to x:(x+n) when n&gt;1. In other words, I consider non-adjacent rows of the harmonic series.</p>\r\n<p class=\"p1\">Above, we saw that n=1 yields a simple cycle away from and back to alignment. For n&gt;1, however, x and (x+n) must go through several rounds of phasing before re-aligning. Put simply, while approaching alignment, both streams cross each other at an imperfect alignment and go through additional round(s) of phasing before aligning. These extra phases occur n times. This process is visible in Figure 6, where faux-alignments are depicted with dotted boxes. Clockwise, the three boxes depict a global alignment, followed by a local maximum where the 31-stream is lagging, and finally a local maximum where the 28-stream is lagging. As the values x and (x+n) increase, faux-alignments become more aligned and, at high enough values, become indiscernible from total alignment. This actually results in a perceived beating frequency of n, regardless of whether n is the common factor of both streams. Ultimately, the frequency of a beat between any two rows in the harmonic series is their difference, n.</p>\r\n<p class=\"p1\"><img src=\"/media/uploads/user/a451f117de869b2a64a8ba14f915a318.png\" alt=\"Figure 6\" width=\"300\" height=\"295\" /></p>\r\n<p class=\"p1\"><em>Figure 6</em></p>\r\n<p class=\"p1\">This phenomenon is perceivable in both low and high x-values (pulsations and frequencies). 
Though, as mentioned above, faux-alignments are less convincing beat replacements at low x-values (pulsations). Figure 6 is an example of a phasing pulsation with a beating pattern that, at a specific tempo, can either be perceived as one long beat or three faster beats. As mentioned, however, from a musical perspective, the relative difficulty of producing such complex polyrhythms and their rarity compared to tone production argue against this comparison.</p>\r\n<p class=\"p1\">A disparity highlighted by this model appears in the temporal domain, where one pulsation is cognitively elected as the tactus, or reference pulse, in polyrhythms.[12] When one pulse is being entrained, any other pulse is heard as relative, and subordinate, to the initial pulse. This is most likely due to our limited capacity to entrain to only one pulse at a time.[13] Pitch perception, unlike rhythm, does not elect a fixed reference pitch out of a dyad (in non-tonal hierarchical contexts) and has only a slight preference for the faster frequency.[14]</p>\r\n<p class=\"p1\">Finally, we can consider including more than two adjacent proportions. While it is common for frequencies to form ratios with three or more values which do not share common factors (e.g. dominant: 4:5:(6):7), it is virtually impossible to find a polyrhythmic equivalent in music or psychology literature. As in our previous discussion of the &ldquo;range of hearing,&rdquo; pitch ultimately has a much higher tolerance for stacking non-inclusive proportions than rhythm.</p>\r\n<p class=\"p1\">To conclude, thinking of pitch and time in terms of their shared acoustical properties (pulses and frequencies) opens up many interesting avenues for inter-domain connections to be made. Contributions like double time and octave equivalence, beating, and the different proportional &ldquo;domains&rdquo; that pitch and rhythm commonly exemplify all offer valuable insight into further isomorphic models. 
Admittedly, an approach which focuses solely on proportional relationships between pitch and rhythm ultimately struggles to connect with music.</p>\r\n<h2 class=\"p1\">Divisive Cyclic Models</h2>\r\n<p class=\"p1\">A powerful route opens up when considering an abstraction of time in the pitch domain. We know that frequencies in octave relations are perceived as highly similar.[15] That is, a frequency value x and its octave transpositions are nearly interchangeable. This phenomenon is called &ldquo;pitch class.&rdquo; Unlike frequencies, which map linearly, pitch classes can be mapped on a circle. To do this, we draw a line from any frequency x to its octave 2x and wrap the ends together, forming a circle. The frequencies on the line between x and 2x retain their relative distances as they bend around. Suddenly, we have a circle depicting not the frequency values of points, but the proportions/intervals between them. The opposite ends of the circle, which represent the greatest intervallic distance, are separated by half an octave (a tritone, a 1:&radic;2 frequency ratio). This cycle can be demarcated by any number of pitch-classes in any interval pattern.</p>\r\n<p class=\"p1\">This brings the paper to the second model of pitch/time isomorphisms: cyclic models. We can draw an isomorphism between the cyclic models of pitch-class and rhythm. In the temporal domain, a cyclic model arises when a pulsation is wrapped onto itself so that a single onset serves as both the beginning and end of a fixed-duration cycle. As mentioned earlier, this model draws an isomorphism between the two audible units of each domain: tones (in the form of pitch classes) and onsets. Pitch-classes demarcate points on the pitch-class circle, while onsets demarcate points on the rhythm circle. 
Isomorphisms can only be drawn from &ldquo;strongly isomorphic&rdquo; circles: circles with the same number and spacing pattern of divisions.</p>\r\n<p class=\"p1\">There are two hurdles to overcome in drawing this isomorphism. First is the issue of equating onset/offset and tones. Second is the initial disparity between time and tone. The disjunction between onset/offset and pitch is depicted in Figure 1. The audible members of each domain, tones and onsets, exist at different levels of abstraction. While evenly-cycling onsets create a repeating duration, a pair of pitches (each itself a repeating duration) gives a proportion: a &ldquo;tonal interval.&rdquo; This means that intervals within the temporal cycle will not have the same values as intervals along the pitch cycle.</p>\r\n<p class=\"p1\">The difference manifests itself when considering the extremities of either circle. For the rhythm circle, the opposite ends will feature a ratio of 1:2; for the pitch circle, 1:&radic;2 (a tritone). In drawing this isomorphism, however, we can overlook this discrepancy. We are not attempting to equate the units of the circles. Instead, we are interested in the patterns of distances formed by points along both circle depictions.</p>\r\n<p class=\"p1\">The next hurdle, pointed out at the beginning of the paper, is reconciling time across the two models. In a rhythm circle, time always moves in one direction. In a pitch-class circle, time does not exist. This can be easily overcome using the simple depiction in Figure 7.</p>\r\n<p class=\"p1\"><img src=\"/media/uploads/user/92f2aeca96c0a97a20eb7bbf32bff3b3.png\" alt=\"Figure 7a\" width=\"285\" height=\"283\" /></p>\r\n<p class=\"p1\"><em>Figure 7a</em></p>\r\n<p class=\"p1\"><img src=\"/media/uploads/user/5d5db204d65c9f3d0ac6c5b11a4ff95b.png\" alt=\"Figure 7b\" width=\"284\" height=\"315\" /></p>\r\n<p class=\"p1\"><em>Figure 7b</em></p>\r\n<p class=\"p1\">Consider a ball which can travel along the circumference of the pitch-class or rhythm circle. 
A point sounds whenever the ball moves onto it. In a pitch-class circle, the ball can remain stagnant, slide, or skip instantaneously in either direction (higher or lower) across the real pitches. However, in a rhythm circle, the ball must move continuously in the same direction (forward) and pass through every point along the circle.</p>\r\n<p class=\"p1\">This connection ends up being quite powerful. The most notable isomorphisms come from parallel intervallic patterns between temporal and pitch-class cycles. In his paper, &ldquo;Cognitive Isomorphisms between Pitch and Rhythm in World Musics,&rdquo; Pressing points out 24 pages&rsquo; worth of isomorphisms between two common 12-cycles: 12-tET in Western music and Sub-Saharan African 12-beat-cycle music. Beyond his discoveries, isomorphisms between expressive musical behaviors are also revealed by mapping them into these cycles. Pitch slides and inflections find their counterparts in rubato and tempo stretching; detunings find theirs in rhythmic swing. The direction of swing (relative to the beat) is isomorphic with sharp vs. flat detuning (relative to the scale).</p>\r\n<p class=\"p1\">The main disparity exemplified by this model is similar to that of the previous model. Musical rhythm commonly occurs as recursive divisions of the circle through small integer values. Pitch, in many cultures, does not subdivide the circle recursively or as finely. This difference likely arises because rhythm tends toward recursive subdivision, whereas tonal scales (pitch-class sets) are hypothesized to arise from a stacked interval.[16]</p>\r\n<p class=\"p1\">Ultimately, musical systems which employ equal divisions of a timespan (for rhythm) and an octave (for pitch) are prevalent in a vast number of musical cultures. This model thus offers a broad terrain for making connections from one tradition to another. Cyclic conceptions of music, being so prevalent, provide further platforms for exploration. 
Perhaps these models could take on a more literal form: a &ldquo;rhythm-class&rdquo; cycle, or a linear pitch cycle rooted in hertz values.</p>\r\n<h2 class=\"p1\">Conclusion</h2>\r\n<p class=\"p1\">While I have only investigated a few models, there are many more paths to explore: additive cyclic models, pitch vs. temporal hierarchies, isomorphic serial transformations, temporal equivalents of non-octave-repeating scales, tonal equivalents of rhythmic integration, and more. Several benefits arise from pursuing this topic. A greater understanding of these two musical domains could advance the understanding of musics across cultures by proposing new avenues for analysis. Established isomorphic models could also suggest new avenues for analysis within the two domains themselves, opening new terrain for theorists, composers, and musicologists alike. Both the connections and the disparities that arise between the temporal and tonal domains work hand-in-hand with the cognitive processes guiding a listener. Addressing these models can clarify the nuances of music perception and cognition across the two domains.</p>\r\n<h4 class=\"p1\">Note</h4>\r\n<p class=\"p1\">As a student, I note that my understanding of this subject is still nascent and developing. My goal in this article is to provide an accessible summary for new artists and analysts; for those looking to tackle contemporary theoretical issues and bring together knowledge from different fields in music. I wrote this because the idea of similar musical structures in time and tone pushed me to advance my knowledge in several fields: ethnomusicology, music theory, auditory perception and cognition, and acoustics. It pushed me to confront the limits of my understanding of important musical concepts and, ultimately, to develop better control of them. 
My hope is that this article can deliver these nuances to the reader.</p>\r\n<p class=\"p1\">- Julien Palli&egrave;re, 2019</p>\r\n<h2 class=\"p1\">Notes</h2>\r\n<p class=\"p1\">[1] Ashley, Richard. &ldquo;Musical Pitch Space Across Modalities: Spatial and Other Mappings Through Language and Culture.&rdquo; pp. 64&ndash;71.</p>\r\n<p class=\"p1\">[2] Krumhansl, Carol L. &ldquo;Rhythm and Pitch in Music Cognition.&rdquo; Psychological Bulletin, vol. 126, no. 1, 2000, pp. 159&ndash;179.</p>\r\n<p class=\"p1\">[3] London, Justin. &ldquo;Hearing in Time.&rdquo; 2004.</p>\r\n<p class=\"p1\">[4] Longstaff, Alan. &ldquo;Acoustics and Audition.&rdquo; Neuroscience, BIOS Scientific Publishers, 2000, pp. 171&ndash;184.</p>\r\n<p class=\"p1\">[5] Nozaradan, S., et al. &ldquo;Tagging the Neuronal Entrainment to Beat and Meter.&rdquo; Journal of Neuroscience, vol. 31, no. 28, 2011, pp. 10234&ndash;10240.</p>\r\n<p class=\"p1\">[6] Repp, Bruno H. &ldquo;Perceiving the Numerosity of Rapidly Occurring Auditory Events in Metrical and Nonmetrical Contexts.&rdquo; Perception &amp; Psychophysics, vol. 69, no. 4, 2007, pp. 529&ndash;543.</p>\r\n<p class=\"p1\">[7] Vassilakis, Panteleimon Nestor. &ldquo;Perceptual and Physical Properties of Amplitude Fluctuation and Their Musical Significance.&rdquo; 2001.</p>\r\n<p class=\"p1\">[8] Smoorenburg, Guido F. &ldquo;Audibility Region of Combination Tones.&rdquo; The Journal of the Acoustical Society of America, vol. 52, no. 2B, 1972, pp. 603&ndash;614.</p>\r\n<p class=\"p1\">[9] Pressing, Jeff, et al. &ldquo;Cognitive Multiplicity in Polyrhythmic Pattern Performance.&rdquo; Journal of Experimental Psychology: Human Perception and Performance, vol. 22, no. 5, 1996, pp. 1127&ndash;1148.</p>\r\n<p class=\"p1\">[10] Pitt, Mark A., and Caroline B. Monahan. &ldquo;The Perceived Similarity of Auditory Polyrhythms.&rdquo; Perception &amp; Psychophysics, vol. 41, no. 6, 1987, pp. 
534&ndash;546.</p>\r\n<p class=\"p1\">[11] Peper, C. E., et al. &ldquo;Frequency-Induced Phase Transitions in Bimanual Tapping.&rdquo; Biological Cybernetics, vol. 73, no. 4, 1995, pp. 301&ndash;309.</p>\r\n<p class=\"p1\">[12] Handel, Stephen, and James S. Oshinsky. &ldquo;The Meter of Syncopated Auditory Polyrhythms.&rdquo; Perception &amp; Psychophysics, vol. 30, no. 1, 1981, pp. 1&ndash;9.</p>\r\n<p class=\"p1\">[13] Jones, Mari R., and Marilyn Boltz. &ldquo;Dynamic Attending and Responses to Time.&rdquo; Psychological Review, vol. 96, no. 3, 1989, pp. 459&ndash;491.</p>\r\n<p class=\"p1\">[14] Palmer, Caroline, and Susan Holleran. &ldquo;Harmonic, Melodic, and Frequency Height Influences in the Perception of Multivoiced Music.&rdquo; Perception &amp; Psychophysics, vol. 56, no. 3, 1994, pp. 301&ndash;312.</p>\r\n<p class=\"p1\">[15] Deutsch, Diana, and Edward M. Burns. &ldquo;Intervals, Scales, and Tuning.&rdquo; The Psychology of Music, Academic Press, 1999, pp. 252&ndash;256.</p>\r\n<p class=\"p1\">[16] &ldquo;Errata: Aspects of Well-Formed Scales.&rdquo; Music Theory Spectrum, vol. 12, no. 1, 1990, p. 171.</p>\r\n<h4 class=\"p1\">Additional References</h4>\r\n<p class=\"p1\">Pressing, Jeff. &ldquo;Cognitive Isomorphisms between Pitch and Rhythm in World Musics: West Africa, the Balkans and Western Tonality.&rdquo; Studies in Music, vol. 17, 1983, pp. 38&ndash;61.</p>\r\n<p class=\"p1\">London, J. &ldquo;Some Non-Isomorphisms between Pitch and Time.&rdquo; Journal of Music Theory, vol. 46, no. 1-2, Jan. 2002, pp. 127&ndash;151.</p>\r\n<p class=\"p1\">Rahn, John. &ldquo;On Pitch or Rhythm: Interpretations of Orderings of and in Pitch and Time.&rdquo; Perspectives of New Music, vol. 13, no. 2, 1975, p. 182.</p>\r\n<p class=\"p1\">Stevens, Catherine. &ldquo;Cross-Cultural Studies of Musical Pitch and Time.&rdquo; Acoustical Science and Technology, vol. 25, no. 6, 2004, pp. 433&ndash;438.</p>\r\n<p class=\"p1\">Bar-Yosef, Amatzia. 
&ldquo;A Cross-Cultural Structural Analogy between Pitch and Time Organizations.&rdquo; Music Perception: An Interdisciplinary Journal, vol. 24, no. 3, 2007, pp. 265&ndash;280.</p>\r\n<p class=\"p1\">Stockhausen, Karlheinz. &ldquo;Structure and Experiential Time.&rdquo; Die Reihe, vol. 2, 1958, p. 64.</p>\r\n<p class=\"p1\">Stockhausen, Karlheinz, and Elaine Barkin. &ldquo;The Concept of Unity in Electronic Music.&rdquo; Perspectives of New Music, vol. 1, no. 1, 1962, p. 39.</p>\r\n<p class=\"p1\">Grisey, G&eacute;rard. &ldquo;Tempus Ex Machina: A Composer&rsquo;s Reflections on Musical Time.&rdquo; Contemporary Music Review, vol. 2, no. 1, 1987, pp. 239&ndash;275.</p>\r\n<p class=\"p1\">Roads, Curtis. Microsound. MIT Press, 2004.</p>\r\n<p class=\"p1\">Zanto, Theodore P., et al. &ldquo;Neural Correlates of Rhythmic Expectancy.&rdquo; Advances in Cognitive Psychology, vol. 2, no. 2, Jan. 2006, pp. 221&ndash;231.</p>\r\n<p class=\"p1\">Grahn, Jessica A. &ldquo;Neural Mechanisms of Rhythm Perception: Current Findings and Future Perspectives.&rdquo; Topics in Cognitive Science, vol. 4, no. 4, 2012, pp. 585&ndash;606.</p>",
        "topics": [
            {
                "id": 95,
                "name": "Acoustics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3,
                "name": "Informatique musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 96,
                "name": "Contemporary",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 94,
                "name": "Ethnomusicology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 92,
                "name": "Music cognition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 93,
                "name": "Musicology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 91,
                "name": "Music theory",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 40,
                "name": "Orchestration",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 89,
                "name": "Pitch",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 88,
                "name": "Rhythm",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 90,
                "name": "Structure",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 87,
                "name": "Time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 86,
                "name": "Tone",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17613,
            "forum_user": {
                "id": 17609,
                "user": 17613,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/600956cf37cb3b863e4fb21341e94dee?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jjpalliere",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "isomorphisms-between-time-and-tone",
        "pk": 224,
        "published": true,
        "publish_date": "2019-06-20T22:14:19+02:00"
    },
    {
        "title": "Max Workshop: Assisted Composition and Orchestration",
        "description": "A workshop by Grégoire Lorieux, 25 Sept. 2025, Riga (Latvia)",
        "content": "<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\n<p>This hands-on workshop introduces participants to the use of the Max environment as a tool for assisted composition and orchestration. The session will cover patching strategies for algorithmic generation of musical material and interactive orchestration. Participants will learn to build compositional systems that respond dynamically to musical parameters, bridging the gap between human intuition and machine assistance. Prior experience with Max is recommended; example patches will be provided to ensure all attendees can actively participate.</p>\r\n<p>&nbsp;</p>\r\n<p>REQUIREMENTS for PARTICIPANTS:</p>\r\n<p>- a recent computer (Mac or Windows). AirDrop is available for Mac users; Windows users should bring a USB stick.</p>\r\n<p>- Max 8 or 9 installed and authorized + <strong>bach</strong> &amp; <strong>cage</strong> Max libraries installed from the Package Manager + <strong>orchidea</strong> Max library installed from the orch-idea.org website.</p>\r\n<p><img src=\"/media/uploads/max_bach.png\" alt=\"\" width=\"378\" height=\"283\" /></p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>\r\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 3044,
            "forum_user": {
                "id": 3042,
                "user": 3044,
                "first_name": "Gregoire",
                "last_name": "Lorieux",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/cd7913e7acfc03b53fbc5d9c30da67ce?s=120&d=retro",
                "biography": "Grégoire Lorieux is a composer, artistic director, and computer music designer, teaching at IRCAM. After studying early music and completing a master’s thesis on Kaija Saariaho, he studied composition with Philippe Leroux and at the Conservatoire de Paris, while joining IRCAM as a technology professor. In 2012, he took part in SPEAP at Sciences Po Paris with Bruno Latour, exploring connections between art, ecology, and social engagement. Active in education, he has led numerous projects combining creation and cultural outreach, such as IRCAM’s Ateliers de la Création and Paysages Composés with Ensemble Ars Nova and Quatuor Diotima. From 2013 to 2024, he was co-director of Ensemble Itinéraire. He taught electroacoustic composition at the Paris Conservatoire from 2019 to 2024. His musical language integrates electronics and French spectralism, exploring various formats from installations to concert works. In 2022, he founded Mondes Sonores, an open-air festival linking music and ecology.",
                "date_modified": "2026-02-27T15:38:40.219400+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 354,
                        "forum_user": 3042,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 25,
                                "membership": 354
                            },
                            {
                                "id": 599,
                                "membership": 354
                            },
                            {
                                "id": 655,
                                "membership": 354
                            },
                            {
                                "id": 781,
                                "membership": 354
                            },
                            {
                                "id": 917,
                                "membership": 354
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "lorieux",
            "first_name": "Gregoire",
            "last_name": "Lorieux",
            "bookmarks": []
        },
        "slug": "max-workshop-assisted-composition-and-orchestration",
        "pk": 3559,
        "published": true,
        "publish_date": "2025-07-17T11:26:57+02:00"
    },
    {
        "title": "Workshop: Binaural Development in Max/MSP - Marta Rossi",
        "description": "A workshop on binaural development in Max/MSP using IRCAM's Spat5 library",
        "content": "<p style=\"text-align: center;\"><a href=\"https://forum.ircam.fr/agenda/save-the-date-ateliers-du-forum-2024-edition-des-30-ans/detail/\"><img src=\"/media/uploads/bandeaux_articles.png\" width=\"990\" height=\"330\" /></a></p>\r\n<p>Presented by: Marta Rossi<br /><a href=\"https://forum.ircam.fr/profile/noone_511/\">Biography</a></p>\r\n<p>-</p>\r\n<p>\"<strong>Binaural development in Max/MSP</strong>\" is a hands-on workshop on how to work with the <strong>binaural</strong> format in Max/MSP using IRCAM's <strong>Spat5</strong> library. Participants will learn how to set up binaural decoding for headphones in Max/MSP, how to encode sound sources in object-based 3D Ambisonic space, how to extract and automate the positions of sound sources, how to use 3D panning to create dynamic panning, and how to use Spat's main reverberation. The hands-on session will be preceded by a brief technical explanation of Ambisonics and the binaural format, so that the audience has the theoretical tools to understand how they work in general and how they are used in Max/MSP.&nbsp;Participants will leave with a working patch that can be applied to any project.</p>\r\n<p>-<br />This is a workshop for Spat5 beginners, but prior knowledge of Max/MSP is essential. Participants will need their own laptop and headphones, either in-ear or over-ear.</p>\r\n<p>-<br />Before the workshop, if you do not already have them, please download the free 30-day trial of Max/MSP <a href=\"https://cycling74.com/downloads\">here</a> - and the Spat library from the <a href=\"https://forum.ircam.fr/projects/detail/spat/\">IRCAM Forum website</a> - Spat is completely free, but you will need to register and log in to download it. Max/MSP is installed via its installer, while the Spat folder must be copied to ../Documents/Max8/Packages on macOS and Windows.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8eb32d037edc6528c518dfd1d5cf3524.png\" /></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a></strong></p>",
        "topics": [
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 45,
                "name": "Spat5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1096,
                "name": "workshop",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 24289,
            "forum_user": {
                "id": 24262,
                "user": 24289,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Marta_bw.jpg",
                "avatar_url": "/media/cache/ac/6d/ac6d8d2a29bb4623262e8a954a192916.jpg",
                "biography": "Marta Rossi (aka NoOne) is an Italian composer, performer, and sound and visual artist based in the north-east of Scotland. Deeply interested in relationships between chaos and order, where ordered macro-structures emerge from chaotic and unregulated behaviours, and in interactions between living beings and machines, she is engaged in aesthetic-philosophical research into how to destabilise the subject-object hierarchy and how we can take advantage of the idea and experience of connections. Along this path she has organized unconventional events of electronic music and contemporary art, collaborated with several artists in live and theatrical performances, and produced original soundtracks for independent short films. With her duo, Silent Chaos, she has performed in many venues across Italy and the UK (Cryptic Nights, Sound Festival, sonADA, Listen Again Festival, and others) and worked on five studio albums. In recent years their performances have focused on immersive A/V shows and the use of sensors in installations, such as Human AutomatArt, a large sensor-based generative graphics installation.",
                "date_modified": "2026-02-12T12:52:52.057537+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "noone_511",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 30,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 26,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 27,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 86,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 24289,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 24289,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "workshop-binaural-development-in-maxmsp",
        "pk": 2712,
        "published": true,
        "publish_date": "2024-02-07T10:44:49+01:00"
    },
    {
        "title": "Modular and Interfaced Spat5 SPAT in Max for Live device “TSofL” by Jing-Shiuan Tsang",
        "description": "“TSofL” (the SPAT Operator for Live) is a family of Max for Live devices that features a modular and interfaced SPAT model made within the Max for Live environment. For Ableton Live users, it allows up to 32 stereo sources and 64 speakers to be used simultaneously. TSofL contains several devices: TSofL SPAT, TSofL Room, TSofL Source, and TSofL Matrix.\r\n\r\n“TSofL SPAT” is the main SPAT processing surface, enabling users to establish a module suitable for their specific requirements. While “TSofL Room” defines the room effects of the SPAT, “TSofL Source” defines the source effects.",
        "content": "<p style=\"font-weight: 400;\">&ldquo;TSofL&rdquo; (the SPAT Operator for Live) is a family of Max for Live devices that features a modular and interfaced SPAT model made within the Max for Live environment. For Ableton Live users, it allows up to 32 stereo sources and 64 speakers to be used simultaneously. TSofL contains several devices: TSofL SPAT, TSofL Room, TSofL Source, and TSofL Matrix.</p>\r\n<p style=\"font-weight: 400;\">&ldquo;TSofL SPAT&rdquo; is the main SPAT processing surface, enabling users to establish a module suitable for their specific requirements. While &ldquo;TSofL Room&rdquo; defines the room effects of the SPAT, &ldquo;TSofL Source&rdquo; defines the source effects.</p>\r\n<p style=\"font-weight: 400;\">&ldquo;TSofL Matrix&rdquo; is a Max for Live device designed to receive multichannel outputs (primarily from TSofL SPAT or other multichannel M4L devices), record multichannel sound files, and perform bus sends within a single device. Its main use is quickly and efficiently assigning multichannel outputs to external outputs (a 64 &times; 64 routing matrix) with just a few selections, along with multichannel volume control and multichannel file recording, rather than adding numerous empty tracks and assigning them to individual destinations (channels).</p>\r\n<p style=\"font-weight: 400;\">&ldquo;TSofL&rdquo; also has a standalone version, &ldquo;TSofM,&rdquo; which includes &ldquo;TSofM SPAT&rdquo; and &ldquo;TSofM SPAT Send.&rdquo; The &ldquo;TSofM SPAT Send&rdquo; device handles the SPAT viewer, OSC positioning, and collection of SPAT configurations inside Ableton Live, and communicates with &ldquo;TSofM SPAT.&rdquo; &ldquo;TSofM SPAT&rdquo; handles all spatialization audio processing outside of Ableton Live (in Max), significantly reducing CPU usage in Ableton Live.</p>",
        "topics": [],
        "user": {
            "pk": 86096,
            "forum_user": {
                "id": 85993,
                "user": 86096,
                "first_name": "Karin",
                "last_name": "Laenen",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/65d11482a61a673c06dbdcf4cb9d156b?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-03-04T16:45:07.346631+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 944,
                        "forum_user": 85993,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 656,
                                "membership": 944
                            },
                            {
                                "id": 657,
                                "membership": 944
                            },
                            {
                                "id": 846,
                                "membership": 944
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "laenen",
            "first_name": "Karin",
            "last_name": "Laenen",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 86096,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "modular-and-interfaced-spat5-spat-in-max-for-live-device-tsofl-by-jing-shiuan-tsang",
        "pk": 3063,
        "published": true,
        "publish_date": "2024-10-23T12:10:42+02:00"
    },
    {
        "title": "Brain-Computer Music Interfacing for live performance by Jachin Edward Pousson.",
        "description": "Exploring the experience of controlling music and visuals with a BCMI system.",
        "content": "<p><strong>Music begins and ends in the human brain.&nbsp;</strong><br /><strong>Our bodies interface with musical instruments for the musical mind to express itself.&nbsp;</strong><br /><strong>The computer has become the musical instrument of our times, enabling many new ways to play.&nbsp;</strong></p>\r\n<p>Brain-Computer Music Interfacing (BCMI) potentially takes these ideas a step further by considering the brain itself as a musical instrument whose electroencephalography (EEG) signals can be harnessed to enable new channels and modes of expression in live performance. &nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/9c0ab7458bfa817b14dbb785effa729d.jpg\" /></p>\r\n<p>Biofeedback has been explored earnestly in the arts for the past two decades as a way to interact with, and express oneself through, one's own physiological state. BCMI systems use the EEG signal, algorithmically transforming and mapping it to outputs in formats such as MIDI, OSC, or DMX that are useful for controlling media. Research aimed at how to obtain, process, and transform the EEG signal into useful parameters for live performance has been ongoing at JVLMA since 2019.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/4734273bdb4cee8323d5a1b809919e2d.png\" /></p>\r\n<p>The BCMI system developed within the frame of this project was based on decoding the expressive intentions of a performer in two contrasting states: high arousal and low arousal. This was done by characterising spectral power during emotionally expressive music performance relative to emotionally neutral music performance. 
This paradigm has been explored in various live concert settings from modular synthesis to orchestral percussion.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/7f64aa4e4a8533f7090c3b807d61cb24.jpg\" /></p>\r\n<p>Current efforts aim to extend these tools for multiple users, in which inter-brain dynamics during co-creative tasks can be used to manipulate immersive multimedia. This would enable shared brain activity to play a role in the creation or experience of art.</p>\r\n<p><a href=\"https://www.jachinpousson.com/research\" title=\"Portfolio site\">Research page</a></p>\r\n<p><a href=\"https://orcid.org/0000-0001-6215-1099\" title=\"Orcid link\">Publications</a></p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 562,
                "name": "Bcmi",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 565,
                "name": "Biofeedback",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1786,
                "name": "EEG",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 156,
                "name": "Live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 127247,
            "forum_user": {
                "id": 127079,
                "user": 127247,
                "first_name": "Jachin Edward",
                "last_name": "Pousson",
                "avatar": "https://forum.ircam.fr/media/avatars/ES_IMG_9849.jpg",
                "avatar_url": "/media/cache/40/ae/40aeea10010ab2c3f05d29939b639c2e.jpg",
                "biography": "Jachin Edward Pousson (1983 USA), lived in Singapore and Copenhagen before moving to Riga in 2012. His is educated in Composition (BA and MA), Systematic Musicology (PhD), and currently holds a research position at JVLMA specializing in Brain-Computer Music Interface (BCMI) design. From 2019 onwards his research has used the electroencephalography (EEG) method to study brain dynamics during embodied music interaction and has applied outcomes to develop tools for harnessing the EEG signal in live performance. His artistic activities include composing, performing, producing and publishing academic, free improvisation, electronic and experimental folklore music.",
                "date_modified": "2025-08-04T11:09:12.055288+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jpousson",
            "first_name": "Jachin Edward",
            "last_name": "Pousson",
            "bookmarks": []
        },
        "slug": "brain-computer-music-interfacing-for-live-performance",
        "pk": 3596,
        "published": true,
        "publish_date": "2025-08-04T10:50:07+02:00"
    },
    {
        "title": "Koral: Playing music collectively using smartphones as musical instruments by Frederic Bevilacqua, Charles-Edouard de Surville and Damien Barbaza",
        "description": "Developed by the Association Arts Convergences in partnership with Ircam (ISMM team), KORAL is an application for playing music intuitively and playfully in a group, transforming smartphones into musical instruments.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"><img src=\"/media/uploads/koral-entete.png\" alt=\"\" width=\"1481\" height=\"554\" /></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">Presented by Frederic Bevilacqua, Charles-Edouard de Surville, Damien Barbaza</div>\r\n<div class=\"c-content__button\"><a href=\"https://forum.ircam.fr/profile/bevilacq/\" target=\"_blank\">Biography</a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -100px; top: -20.0052px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>\r\n<p>Developed by the Association Arts Convergences in partnership with Ircam (ISMM team), KORAL is an application for playing music intuitively and playfully in a group, turning smartphones into musical instruments. Simple and customizable gestures can trigger musical sounds and patterns that are automatically and harmoniously integrated into a collective music piece. This approach aims at stimulating creative expression and encouraging mutual listening and support.</p>\r\n<div></div>\r\n<div>The goal is to provide health and social organizations with an effective solution for running free music workshops. The KORAL application was developed using a user-centred approach, conducting workshops and adjusting functionalities based on user feedback. More than hundred people, with a large range of profiles, participated in 40 workshops held by the Association Arts Convergences, between January and July 2024. 
Beyond testing the application and making it more reliable, the KORAL workshops, very well received, confirmed the relevance of the approach.</div>\r\n<div></div>\r\n<div>The application takes advantage of developments by the ISMM team, in particular the Comote application for smartphones (Apple and Android) and a series of Max For Live plugins developed on the basis of the MuBu for Max package.</div>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 790,
                "name": "Comote",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2736,
                "name": "Forum 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 639,
                "name": "ISMM Team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 61,
                "name": "Mubu",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 21,
            "forum_user": {
                "id": 21,
                "user": 21,
                "first_name": "Frederic",
                "last_name": "Bevilacqua",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a5c31b02a13ce493dbe36917564770e5?s=120&d=retro",
                "biography": "Frédéric Bevilacqua is the head of the Sound Music Movement Interaction team at IRCAM in Paris, in the joint research lab Science & Technology for Music and Sound – IRCAM – CNRS – Sorbonne Université. His research concerns the interaction between movement and sound and the development of gesture-based interactive systems, with applications in performing arts, education and health.\n\nHe holds a MS in physics and a Ph.D. in Biomedical Optics from EPFL. He  studied music at the Berklee College of Music in Boston. From 1999 to 2003 he was a researcher at the Beckman Laser Institute at the University of California Irvine. In 2003 he joined IRCAM as a researcher on gesture analysis for music and performing arts.\n\nHe co-authored more than 150 scientific publications and co-authored 5 patents. He was keynote or invited speaker at several international conferences such as the ACM TEI’13. He was awarded in 2011 the 1st Prize of the Guthman Musical 1st Prize of the Guthman Musical Instrument Competition (Georgia Tech) and received the award “prix ANR du Numérique” from the French National Research Agency (2013).",
                "date_modified": "2026-01-25T21:51:30.597035+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 12,
                        "forum_user": 21,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-17",
                        "type": 0,
                        "keys": [
                            {
                                "id": 270,
                                "membership": 12
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "bevilacq",
            "first_name": "Frederic",
            "last_name": "Bevilacqua",
            "bookmarks": []
        },
        "slug": "koral-jouer-de-la-musique-en-groupe-en-transformant-les-smartphones-en-instruments-de-musique",
        "pk": 3349,
        "published": true,
        "publish_date": "2025-03-11T12:07:43+01:00"
    },
    {
        "title": "Somax2 version 2.5 is out !",
        "description": "A new version fully max-programmable and with brand new set of tutorials, help, guides and videos",
        "content": "<p>Somax2 version 2.5 has seen a complete object redesign and modularisation so every object can be used in application style with full UI or rather in max library style with full programmability / messaging.</p>\r\n<p>The documentation and help has been updated for the entire package. This includes:</p>\r\n<ul>\r\n<li><a href=\"https://github.com/DYCI2/Somax2/blob/master/Somax2%20User's%20Guide.pdf\">A new, comprehensive user's guide</a></li>\r\n<li><a href=\"https://vimeo.com/showcase/somax2-tutorials\">4 video tutorials to learn Somax2</a></li>\r\n<li><a href=\"https://vimeo.com/showcase/somax2-demos\">5 videos demos on advanced performance usage</a></li>\r\n<li>An interactive overview for centralized access to all help, tutorials and templates (somax2.overview.maxpat)\r\n<ul>\r\n<li>New maxhelps and reference pages for all objects</li>\r\n<li>2 step-by-step tutorials for Somax2 application users</li>\r\n<li>7 step-by-step tutorials for Max programmers</li>\r\n<li>3 performance strategies tutos for different ways of using Somax musically</li>\r\n<li>4 templates to quickly start using Somax concert apps</li>\r\n</ul>\r\n</li>\r\n</ul>\r\n<p>Goto to <a href=\"https://forum.ircam.fr/projects/detail/somax-2/\">Somax2 Forum page</a> for installation</p>\r\n<p>See more at <a href=\"http://repmus.ircam.fr/somax2\">Somax2 Project Page&nbsp;</a></p>\r\n<p>Somax2 is an application for co-improvisation and composition. It is implemented in Max and is based on a generative model &nbsp;to provide stylistically coherent improvisation, while in real-time listening to and adapting to musicians (or any other type of audio or MIDI source including other Somax agents).</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 753,
                "name": "Artificial intelligence,",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1200,
                "name": "cocreativity",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 545,
                "name": "Repmus team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 747,
                "name": "somax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17748,
            "forum_user": {
                "id": 17743,
                "user": 17748,
                "first_name": "Gerard",
                "last_name": "Assayag",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e7f22ca09fef8b854d33ed5de26b107e?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-03T15:40:57.523680+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1236,
                        "forum_user": 17743,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "assayag",
            "first_name": "Gerard",
            "last_name": "Assayag",
            "bookmarks": []
        },
        "slug": "somax2-version-25-is-out",
        "pk": 2168,
        "published": true,
        "publish_date": "2023-03-30T07:54:12+02:00"
    },
    {
        "title": "Tutoriel Modalys n°8 : The Hybrid",
        "description": "Huitième partie de ma série de tutoriels sur l'utilisation de Modalys et de ses bibliothèques dans Modalisp, OpenMusic et Max.",
        "content": "<p><strong>Dans ce tutoriel, nous essayons un instrument hybride.</strong></p>\r\n<p></p>\r\n<p>Une des grandes caract&eacute;ristiques de Modalys est la possibilit&eacute; de fabriquer un objet hybride. Vous pouvez m&eacute;langer deux ou trois instruments et en faire un \"hybride\". Vous pouvez faire simple comme dans le tutoriel ou m&eacute;langer des instruments plus complexes et faire des exp&eacute;riences de toutes sortes de fa&ccedil;ons.</p>\r\n<h6></h6>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/s4OYWnJ_BuA\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h6 style=\"text-align: center;\"></h6>\r\n<p style=\"text-align: left;\"><strong>Ce tutoriel a &eacute;t&eacute; r&eacute;alis&eacute; poar Olav Lervik.&nbsp;</strong></p>",
        "topics": [
            {
                "id": 467,
                "name": "Hybrid",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 348,
                "name": "Max externals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n8-the-hybrid",
        "pk": 730,
        "published": true,
        "publish_date": "2020-10-27T10:00:00+01:00"
    },
    {
        "title": "Continuum, l'expérience augmentée du spectacle vivant dans sa dimension sonore -  Hugues Vinet, Markus Noisternig, Gildas Dussauze, Gaëtan Byk",
        "description": "Présenté lors des Ateliers du Forum Ircam 2023 à Paris.",
        "content": "<p style=\"font-weight: 400;\"><img src=\"/media/uploads/capture_d&rsquo;&eacute;cran_2023-03-16_&agrave;_14.57.40_-_modifi&eacute;.jpg\" width=\"1382\" height=\"590\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"font-weight: 400;\"></p>\r\n<p style=\"font-weight: 400;\">L&rsquo;objet de cette table-ronde est de pr&eacute;senter le projet Continuum, coordonn&eacute; par l&rsquo;Ircam, en collaboration avec les soci&eacute;t&eacute;s Amadeus et VRtuoz. <span>&nbsp;</span>Continuum est soutenu par l&rsquo;&Eacute;tat dans le cadre du dispositif &laquo; Exp&eacute;rience augment&eacute;e du spectacle vivant &raquo;<span>&nbsp; </span>de la fili&egrave;re des industries culturelles et cr&eacute;atives (ICC) de France 2030, op&eacute;r&eacute;e par la Caisse des D&eacute;p&ocirc;ts. La table-ronde comprendra &eacute;galement une pr&eacute;sentation de VRtuoz et Amadeus et de leurs projets r&eacute;cents.</p>\r\n<p style=\"font-weight: 400;\">Continuum d&eacute;signe une nouvelle conception de la production et de la diffusion du spectacle vivant augment&eacute; dans ses dimensions sonores : un continuum entre sc&egrave;nes r&eacute;elles et virtuelles, entre artiste et spectateur-visiteur-auditeur d&rsquo;aujourd&rsquo;hui et entre innovation technologique fran&ccedil;aise et productions culturelles aux formats innovants. 
Porteur d&rsquo;un nouveau standard qualitatif de l&rsquo;immersion sonore, ce programme &agrave; la pointe de l&rsquo;&eacute;tat de l&rsquo;art de la recherche technologique s&lsquo;attache au d&eacute;veloppement d&rsquo;une cha&icirc;ne de production compl&egrave;te, de la captation &agrave; la restitution finale, permettant de cr&eacute;er et de transmettre aux spectateurs un contenu spatialis&eacute; sur diff&eacute;rentes plateformes d&rsquo;&eacute;coute et d&rsquo;interactions individualis&eacute;es et de le diffuser dans des lieux de diff&eacute;rentes configurations (salles de spectacle, espaces publics ou priv&eacute;s). Les fonctions vis&eacute;es sont exp&eacute;riment&eacute;es et valid&eacute;es &agrave; travers un ensemble de cr&eacute;ations remarquables et leur commercialisation concourt &agrave; leur d&eacute;mocratisation.</p>\r\n<p style=\"font-weight: 400;\"></p>\r\n<p style=\"font-weight: 400;\"></p>\r\n<p style=\"font-weight: 400;\"><img src=\"/media/uploads/sans_titre-7.png\" alt=\"\" width=\"1200\" height=\"150\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"font-weight: 400;\"></p>",
        "topics": [],
        "user": {
            "pk": 18210,
            "forum_user": {
                "id": 18203,
                "user": 18210,
                "first_name": "Hugues",
                "last_name": "Vinet",
                "avatar": "https://forum.ircam.fr/media/avatars/Hugues_Vinet_Portrait2017_large_low.jpg",
                "avatar_url": "/media/cache/4c/92/4c92397e1e69913141f89327eccc6007.jpg",
                "biography": "Hugues Vinet is Director of Innovation and Research Means of IRCAM. He has managed all research, development and innovation activities at IRCAM since 1994. He co-founded and ran for several terms the STMS (Science and Technology of Music and Sound) joint lab with French Ministry of Culture, CNRS and Sorbonne Université. He previously worked at the Groupe de Recherches Musicales of National Institute of Audiovisual in Paris where he managed the research and designed the first versions of the award winning real-time audio processing GRM Tools product. He has coordinated many collaborative R&D projects including recently H2020 VERTIGO in charge of the STARTS Residencies program managing 45 residencies of artists with technological research projects throughout Europe. He is currenty IRCAM's PI for EU MediaFutures project (artistic residencies for innovation in media) and DAFNE+ project dedicated to creatives' communities based on blockchain/NFT/DAO. He also curates the Vertigo Forum art-science yearly symposium at Centre Pompidou. He participates in various bodies of experts in the fields of audio, music, multimedia, information technology and innovation.",
                "date_modified": "2026-02-26T18:55:39.688865+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 417,
                        "forum_user": 18203,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-21",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "vinet",
            "first_name": "Hugues",
            "last_name": "Vinet",
            "bookmarks": []
        },
        "slug": "continuum-lexperience-augmentee-du-spectacle-vivant-dans-sa-dimension-sonore-hugues-vinet-markus-noisternig-gildas-dussauze-gaetan-byk",
        "pk": 2147,
        "published": true,
        "publish_date": "2023-03-16T15:01:35+01:00"
    },
    {
        "title": "Auditory Neurofeedback, Embodiment Cognition Control of Spatial Audio Objects and Brainwave-Modulated Generative Music within Cyber-feminist Practice by Zap Bain",
        "description": "A live demonstration presenting Zap Bain's artistic research on real-time EEG brainwave control of spatial audio systems for auditory neurofeedback and generative music performance within the context of cyber-feminism.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>This demonstration presents Zap Bain's artistic research on real-time EEG brainwave control of spatial audio systems for auditory neurofeedback and generative music performance. Building on her October 2025 demonstration at Monom Studios' 72-speaker 4DSOUND system at Funkhaus, this presentation showcases ambisonic work developed with SPAES using SPAT and boids algorithms. The live demo illustrates voluntary brainwave control of spatial audio objects through embodied cognition techniques including movement, neural asymmetry, vestibular system engangement and brainwave frequency ratios. The presentation integrates practical demonstration with theoretical frameworks, examining her cyber-feminist methodology and her groundbreaking voluntary alpha-theta-gamma control achieved through sustained practice.</p>\r\n<p>&nbsp;</p>\r\n<p>Zap Bain is a musician, sound artist/engineer and critical theorist&nbsp;with an artistic research focus on auditory neurofeedback, sound-based&nbsp;biomimicry and spatial audio. She is completing her MA thesis at UdK&nbsp;Berlin Sound Studies and Sonic Arts on a spatial audio neurofeedback system. Her work is presented as live EEG brainwave-modulated hardware performances that&nbsp;range from experimental pop-music to generative sound art and interactive&nbsp;sound installations.</p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/call-parisemghien-zap-bain-projectpicture2.png\" alt=\"\" width=\"2588\" height=\"1598\" /></p>",
        "topics": [],
        "user": {
            "pk": 29083,
            "forum_user": {
                "id": 29055,
                "user": 29083,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_0921.jpg",
                "avatar_url": "/media/cache/fb/2f/fb2f6151b23114abd49972f57f0daa86.jpg",
                "biography": null,
                "date_modified": "2026-02-24T13:29:16.877421+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "zap",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "auditory-neurofeedback-embodiment-cognition-control-of-spatial-audio-objects-and-brainwave-modulated-generative-music-within-cyber-feminist-practice",
        "pk": 4319,
        "published": true,
        "publish_date": "2026-02-05T13:46:23+01:00"
    },
    {
        "title": "Atelier In Situ Polytopes",
        "description": "Atelier de création sonore dans un contexte audiovisuel et immersif",
        "content": "<p>Encadrement p&eacute;dagogique : <strong><a href=\"https://www.tremenss.com/\">TremensS</a></strong> et <strong>T&eacute;n&egrave;bre</strong> (<a href=\"https://www.experienss.com/\">ExperiensS</a> / 665.99), <strong>Pierre Carr&eacute;</strong> (r&eacute;alisateur en informatique musicale de l&rsquo;Ircam)<br />Du lundi 16 au samedi 21 juin 2025 au CENTQUATRE-PARIS</p>\r\n<p>En 1972, le <em>Polytope de Cluny</em> de <a href=\"https://brahms.ircam.fr/fr/iannis-xenakis\">Iannis Xenakis</a> r&eacute;alise l&rsquo;alliance rare d&rsquo;un art savant et populaire : une exp&eacute;rience immersive sonore et visuelle, prototype de nombreuses &oelig;uvres multim&eacute;dias. La bande sonore spatialis&eacute;e, les cr&eacute;pitements de flash, les figures dessin&eacute;es par les rais de lasers et refl&eacute;t&eacute;es par les miroirs&hellip; cet ovni plongeait le spectateur dans un ouragan de lumi&egrave;res et de musique &eacute;lectronique.</p>\r\n<p>En juin 2022, le studio d&rsquo;art num&eacute;rique ExperiensS et Pierre Carr&eacute; (Ircam) r&eacute;adaptaient pour la premi&egrave;re fois le Polytope dans son &eacute;chelle monumentale originale, avec en regard une nouvelle cr&eacute;ation du collectif italien /nu/thing x ExperiensS, <em>Where You There at the Beginning,</em> suivi pour sa nouvelle &eacute;dition 2025 de deux nouvelles cr&eacute;ations &eacute;lectroniques, avec 665.99 (TremensS x T&eacute;n&egrave;bre) et CHLO&Eacute;.</p>\r\n<p>Pour l&rsquo;acad&eacute;mie de ManiFeste, TremensS, T&eacute;n&egrave;bre et Pierre Carr&eacute; proposent, dans l&rsquo;enceinte m&ecirc;me de l&rsquo;installation du Polytope 2025, d&rsquo;explorer les enjeux cr&eacute;atifs, exp&eacute;rientiels et techniques propres &agrave; une installation ou performance audiovisuelle immersive.</p>\r\n<h3>Candidatures</h3>\r\n<p>Les candidat&middot;e&middot;s doivent :</p>\r\n<ul>\r\n<li>&ecirc;tre n&eacute;&middot;e&middot;s apr&egrave;s le 1er janvier 
1993 ;</li>\r\n<li>ne pas avoir particip&eacute; par deux fois d&eacute;j&agrave; &agrave; un autre atelier de l&rsquo;acad&eacute;mie ManiFeste ;</li>\r\n<li>pouvoir s&rsquo;exprimer et comprendre l&rsquo;anglais ou le fran&ccedil;ais.</li>\r\n</ul>\r\n<p><strong>Date limite de candidature</strong> mercredi 15 janvier 2025, 10h CEST.</p>\r\n<p><strong>Plus de d&eacute;tails <a href=\"https://www.ircam.fr/transmission/manifeste/academie/in-situ-polytopes&nbsp;\">https://www.ircam.fr/transmission/manifeste/academie/in-situ-polytopes&nbsp;</a></strong></p>\r\n<p><strong><img src=\"/media/uploads/polytopes_2009_v3_(c)_stephane_sby_balmy.jpg\" alt=\"\" width=\"1014\" height=\"677\" /></strong></p>\r\n<p><strong>Cr&eacute;dit photo : Stephane Sby Balmy</strong></p>",
        "topics": [
            {
                "id": 2445,
                "name": "CENTQUATRE",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2446,
                "name": "ExperiensS ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2447,
                "name": "TremensS",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17721,
            "forum_user": {
                "id": 17716,
                "user": 17721,
                "first_name": "Natacha",
                "last_name": "Moenne-Loccoz",
                "avatar": "https://forum.ircam.fr/media/avatars/1517-IRCAM-MANIF19--VISUEL-0-TheHouse1-Web.jpg",
                "avatar_url": "/media/cache/83/72/8372e1d360cd768ede652baeed45a1fb.jpg",
                "biography": null,
                "date_modified": "2024-12-12T15:36:41.115903+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 206,
                        "forum_user": 17716,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "moennelo",
            "first_name": "Natacha",
            "last_name": "Moenne-Loccoz",
            "bookmarks": []
        },
        "slug": "atelier-in-situ-polytopes",
        "pk": 3154,
        "published": true,
        "publish_date": "2024-12-12T14:47:26+01:00"
    },
    {
        "title": "Tutorial: How to vote for the RAVE Model Challenge",
        "description": "This tutorial is a step-by-step guide to vote for the RAVE Model Challenge hosted by the DAFNE+ platform.",
        "content": "<h1>Tutorial: How to vote for the RAVE Model Challenge</h1>\r\n<p>This tutorial is a step-by-step guide <a href=\"https://forum.ircam.fr/article/detail/rave-model-challenge-vote/\">to voting</a> for the <a href=\"https://forum.ircam.fr/article/detail/rave-model-challenge/\">RAVE Model Challenge</a> hosted by the <a href=\"https://dafneplus.eng.it/\">DAFNE+</a> platform.</p>\r\n<p>The aim of the RAVE Model Challenge is to support the authors of the best models and to collectively establish a repertoire of RAVE models, enabling everyone to benefit from the richness and variety of approaches in the field of timbre/music transfer.</p>\r\n<p>To consult the proposals and make your choice, refer to the proposals page linked below.</p>\r\n<p>\"DAFNE+ provides digital content creators new forms of creation, distribution and monetization of their works of art through blockchain technology.\"</p>\r\n<p>Participation is completely free of charge, since it is based on the testnet network set up for research purposes only.</p>\r\n<p>This tutorial will help users get to grips with the platform and show them how <a href=\"https://forum.ircam.fr/article/detail/rave-model-challenge-vote/\">to vote for the RAVE Model Challenge</a>.</p>\r\n<h2>Tutorial duration: approximately 28 minutes.</h2>\r\n<p><iframe width=\"425\" height=\"350\" src=\"//www.youtube.com/embed/BBLPAEGDapM\"></iframe></p>\r\n<h2>Tutorial in pdf: <a href=\"https://forum.ircam.fr/media/uploads/RAVE%20model%20challenge/dafne%2B_workshop_%E2%80%93_dafne%2B_webinar__how_to_vote_to_the_rave_model_challenge_-_14.02.25.v1.pdf\">Slides of the presentation</a></h2>\r\n<ul>\r\n<li>\r\n<h3>Choose your favorite model</h3>\r\n</li>\r\n</ul>\r\n<blockquote>\r\n<p><a href=\"https://forum.ircam.fr/article/detail/rave-model-challenge-proposals/\">Listen to the sounds produced by 
the models, download them, and make your choice!</a></p>\r\n</blockquote>\r\n<ul>\r\n<li>\r\n<h3>Register on the DAFNE+ platform (free)</h3>\r\nDAFNE+ Platform: <a href=\"https://dafneplus.eng.it/\">https://dafneplus.eng.it</a></li>\r\n</ul>\r\n<p><iframe width=\"425\" height=\"350\" src=\"//www.youtube.com/embed/3T6SOKsRq4U\"></iframe></p>\r\n<ul>\r\n<li>\r\n<h3>Create a Web3 Wallet with Metamask (free)</h3>\r\n</li>\r\n</ul>\r\n<blockquote>\r\n<p>Metamask is a Web3 wallet that works as a browser plugin: <a href=\"https://metamask.io/download/\">https://metamask.io/download/</a></p>\r\n</blockquote>\r\n<p><iframe width=\"425\" height=\"350\" src=\"//www.youtube.com/embed/FL6eyBTPZIM\"></iframe></p>\r\n<ul>\r\n<li>\r\n<h3>Configure Amoy Testnet network and acquire POL (free)</h3>\r\n</li>\r\n</ul>\r\n<p><iframe width=\"425\" height=\"350\" src=\"//www.youtube.com/embed/Nvy4ZjSiGTw\"></iframe></p>\r\n<p>Note: you need at least one LINK token in your account to request test POL from the Amoy faucet (used for verification); otherwise the faucet won't dispense it.</p>\r\n<ul>\r\n<li>\r\n<h3>Have your say in the challenge (free)</h3>\r\n</li>\r\n</ul>\r\n<p><a href=\"https://dafneplus.eng.it/dao/competitions/67aa0bd3b8fadb74b12d90a2\">Go to DAO -&gt; competition -&gt; RAVE Model Challenge</a></p>\r\n<p><video width=\"490\" height=\"245\" controls=\"controls\">\r\n<source src=\"/media/uploads/RAVE model challenge/dafne+_how_to_vote_in_the_rave_model_challenge.mov\" /></video></p>\r\n<ul>\r\n<li>\r\n<h3>Et voil&agrave; !</h3>\r\n</li>\r\n</ul>\r\n<h2>Links:</h2>\r\n<ul>\r\n<li>RAVE Model Challenge: <a href=\"https://forum.ircam.fr/collections/detail/rave-model-challenge/\">https://forum.ircam.fr/collections/detail/rave-model-challenge/</a></li>\r\n<li>RAVE collection: <a href=\"https://forum.ircam.fr/collections/detail/rave/\">https://forum.ircam.fr/collections/detail/rave/</a></li>\r\n<li>DAFNE+ Platform: <a href=\"https://dafneplus.eng.it\">https://dafneplus.eng.it</a></li>\r\n<li>DAFNE+ Website: <a href=\"https://dafneplus.eu\">https://dafneplus.eu</a></li>\r\n<li>DAFNE+ Discord: <a href=\"https://discord.gg/aR6VvV9Ttw\">https://discord.gg/aR6VvV9Ttw</a></li>\r\n<li>DAFNE+ Survey: <a href=\"https://forms.gle/czcJyXhmthFkN5V48\">https://forms.gle/czcJyXhmthFkN5V48</a></li>\r\n<li>DAFNE+ YT tutorials playlist: <a href=\"https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ\">https://www.youtube.com/playlist?list=PLRUFYVHjMwbiSN4rt3qOXHx0czXVBrodZ</a></li>\r\n<li>DAFNE+ YT intro to Use-Case 2: <a href=\"https://dafneplus.eu/2024/02/interview-with-hugues-vinet-ircam-explaining-use-case-2/\">https://dafneplus.eu/2024/02/interview-with-hugues-vinet-ircam-explaining-use-case-2/</a></li>\r\n<li>DAFNE+ Newsletter: <a href=\"https://dafneplus.eu/contact\">https://dafneplus.eu/contact</a></li>\r\n<li>DAFNE+ Contact: <a href=\"mailto:info@dafneplus.eu\">info@dafneplus.eu</a></li>\r\n</ul>\r\n<p><img src=\"/media/uploads/rave_model_challenge_banniere.png\" alt=\"\" width=\"2778\" height=\"676\" /></p>",
        "topics": [
            {
                "id": 2375,
                "name": "challenge",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1254,
                "name": "dafne+",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2376,
                "name": "model",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1745,
                "name": "nn~",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "tutorial-how-to-vote-for-the-rave-model-challenge",
        "pk": 3252,
        "published": true,
        "publish_date": "2025-02-05T11:25:12+01:00"
    },
    {
        "title": "A System Integrating Optical Music Recognition and Real-Time Playback with SCAMP by Yi Fu Chen, Ru Fang Guo, and Jung-Ching Chen",
        "description": "This project develops a web-based system that integrates Optical Music Recognition and real-time playback using YOLOv9 and SCAMP, enabling users to upload piano scores and instantly hear accurate, automatically rendered melodies.",
        "content": "<p><span>This project presents an integrated system that combines Optical Music Recognition (OMR) with real-time playback using SCAMP. YOLOv9 is applied to both image segmentation and symbol recognition, achieving precision scores of 0.99 and 0.81, respectively. A preprocessing algorithm organizes recognition results, which are then translated into pitch and rhythm instructions for immediate playback and MP3 export via SCAMP. To enhance accessibility, a web-based interface allows users to upload piano score images and instantly hear the rendered melody. The system demonstrates strong potential for applications in music education, accessibility, and interactive learning.</span></p>",
        "topics": [
            {
                "id": 3579,
                "name": "OMR",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3580,
                "name": "SCAMP",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 134881,
            "forum_user": {
                "id": 134706,
                "user": 134881,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/73d9b9c012fdf78f651f890069bcf2a0?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-09T06:19:46.398545+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "dora",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "a-system-integrating-optical-music-recognition-and-real-time-playback-with-scamp-by-yi-fu-chen-ru-fang-guo-and-jung-ching-chen",
        "pk": 3885,
        "published": true,
        "publish_date": "2025-10-22T10:14:06+02:00"
    },
    {
        "title": "Artificial Womb - Xu Mingxi",
        "description": "This work is intended to help build the bond between babies born through ectogenesis and their families, for postpartum recovery treatment, and to stimulate and develop the newborn’s hearing. This soundscape is divided into four stages: Oosperm and Embryo, Cell Division, Becoming a Human, and Newborn.",
        "content": "<p>The artificial womb is a device that would allow for ectogenesis &ndash; gestation outside the human body, in an artificial environment built to precisely mimic the womb and carry the fetus to term. In the future, growing embryos in artificial wombs may become a commonly accepted form of assisted reproduction. Artificial wombs can make childbearing accessible to people who cannot otherwise fulfil their dream of having a biological child, such as individuals who have undergone sex reassignment surgery from, say, male to female, members of the LGBTQ+ community, and those who suffer from uterine incapacity but don&rsquo;t want to consider surrogacy (using IVF).<br />&nbsp;<br />In the case of an artificial womb, there are questions as to whether human ectogenesis might further challenge what has been considered the inviolable bond between mother and child. How to maintain the bond between mother, family, and child in an artificial womb is therefore the key question.<br />&nbsp;<br />According to current research, womb-like sounds may be a powerful tool in aiding cardiorespiratory stability, pain mitigation, and sleep promotion in infants. Recreating other womb sensory experiences is also beneficial; for instance, maternal stimulation and involvement through practices like kangaroo care lead to better outcomes for infants born early.</p>",
        "topics": [
            {
                "id": 1168,
                "name": "Artificial Womb",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 524,
                "name": "Design et traitement sonores",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 815,
                "name": "soundscape",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32484,
            "forum_user": {
                "id": 32436,
                "user": 32484,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_7417_2.JPG",
                "avatar_url": "/media/cache/45/b2/45b269f245f926c3004b1da14f961866.jpg",
                "biography": "Mingxi is active in the field of sound art and sound design, and is majoring in Information Experience Art at the Royal College of Art, UK. She is a member of the Sound Professional Committee of the China Society of Motion Picture and Television Engineers (CSMPTE), the Guangdong Association of Recording Engineers (GDARE), and the Cinema Audio Society, America. She has been invited to participate in the Monteaudio Sound Festival in Uruguay and to present at the IRCAM Space Audio Forum. She has also won best sound design awards at the Milan Gold Award, Hollywood Gold Award, AI-Film Academy Award, and other Chinese and international film festivals.",
                "date_modified": "2023-09-11T11:58:32.334956+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "xxming",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 30,
                    "user": 32484,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "artificial-womb",
        "pk": 2072,
        "published": true,
        "publish_date": "2023-02-18T14:22:36+01:00"
    },
    {
        "title": "S666no1ccom",
        "description": "S666no1ccom",
        "content": "<p>A detailed introduction to S666 &ndash; a modern online entertainment platform<br>In the digital era, demand for online entertainment keeps growing, driving the development of many diverse platforms. Among them, S666 is gradually becoming a name that attracts attention thanks to its rich ecosystem and optimized user experience. The S666no1 website serves as an information channel that helps users quickly stay up to date with content related to this platform.<br>What is <a href=\"https://s666no1.com\">S666</a>?<br>S666 is an online entertainment platform offering many engaging game genres, such as prize-draw games, interactive games, virtual sports, and other forms of entertainment. Its products are designed with a friendly, easy-to-use interface, suitable for both newcomers and long-time users.<br>In addition, the system is optimized to run smoothly on many different devices, from computers to mobile phones, so users can easily access and enjoy it anytime, anywhere.<br>Notable advantages of S666<br>One of S666's main attractions is the diversity of its entertainment content. 
Users can choose among many different games according to their personal preferences, from simple games to highly interactive experiences.<br>In addition, the platform regularly runs promotional programs for participants, which enhances the experience and the appeal of the service.<br>The system is also designed for quick and convenient operation, giving users a seamless experience, and the support team operates continuously to answer questions whenever needed.<br>Interface and user experience<br>The S666no1 website is built with a modern interface, a clear layout, and easy navigation. Users can easily find information, register an account, and join entertainment activities in just a few basic steps.<br>Good compatibility on mobile devices is a big plus, allowing users to enjoy entertainment flexibly without being limited by space or time.<br>Notes on use<br>When joining any online entertainment platform, users should be thoughtful and use it in moderation. 
Managing time well and choosing suitable content will help ensure a positive and healthy experience.<br>In addition, users should research the platform carefully before participating and comply with the relevant regulations in their place of residence.<br>Conclusion<br>S666 offers a diverse online entertainment space suited to many different kinds of users. With a friendly interface, rich content, and a flexible experience, the platform is gradually attracting attention within the community of online entertainment enthusiasts. However, its use should be considered carefully to ensure safety and suitability for individual needs.</p>",
        "topics": [],
        "user": {
            "pk": 166489,
            "forum_user": {
                "id": 166252,
                "user": 166489,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ed65a2ca657bf6415fcc2ec2b75df394?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-04-03T10:00:08.837205+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "s666no1ccom",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "s666no1ccom",
        "pk": 4586,
        "published": false,
        "publish_date": "2026-04-03T10:02:08.217744+02:00"
    },
    {
        "title": "Concrete Motion: Embodied Interaction and Gesture-Based Technologies for Teaching and Learning Electroacoustic Music by Lorenzo Binotti",
        "description": "Concrete Motion explores electroacoustic music learning through embodied interaction, combining gesture-based technologies, sound-based practices, and enactive pedagogy to create accessible and collaborative learning environments.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p><strong><em>Short introduction</em></strong><strong><em><br /></em></strong><span>Concrete Motion is an experimental application for sound-based music (Landy, 2007), designed for educational and training contexts. The system builds on the operational flexibility of Max/MSP in combination with the Google MediaPipe body-tracking system, implemented within the TouchDesigner environment. Its aim is to create an interactive digital learning environment that mediates listening, analysis, and electroacoustic music creation through the body and movement.</span></p>\r\n<p><strong><em>The core of the project</em></strong><strong><em><br /></em></strong><span>At the core of Concrete Motion lies the relationship between music and movement, and between sonic and bodily gesture. This relationship is conceived as a central pedagogical device for making the teaching and learning processes of electroacoustic music and sound-based languages both accessible and engaging. Within this perspective, the project is grounded in the framework of Embodied Music Cognition (Leman, 2008), which understands the body as an active agent in musical understanding rather than a passive interface.</span></p>\r\n<p><span>The educational protocol developed for Concrete Motion is positioned at the intersection of the Jaques-Dalcroze approach and Smalley&rsquo;s spectromorphology. Although these frameworks originate from different historical periods and musical contexts, they converge in assigning a key role to perceptual and sensorimotor dimensions in processes of listening, analysis, learning, and musical description. 
The explicit integration of movement further connects these perspectives to enactive pedagogy, reinforcing an understanding of learning as embodied and situated action</span><em><span>.</span></em></p>\r\n<p><strong><em>The role of technologies</em></strong><strong><em><br /></em></strong><span>Within this framework, the interactive technologies implemented in Concrete Motion are designed to strengthen the action&ndash;perception loop that underpins the relationship between bodily gesture and sonic transformation. The learning environment is conceived as a technologically integrated ecosystem, in which students can explore the system&rsquo;s expressive possibilities freely and across multiple levels of interaction.</span></p>\r\n<p><span>Concrete Motion functions as a gesture-controlled digital environment for real-time sound manipulation and consists of two interdependent modules: a stand-alone application for sound playback and processing, and a free body-tracking plugin based on machine learning libraries. This architecture reflects a view of learning as a distributed cognitive system, in which body, technology, and space jointly contribute to the construction of meaning.</span></p>\r\n<p><span>The body-tracking component, developed by integrating MediaPipe within TouchDesigner, enables the recognition of hand movements and, more approximately, of the body in three-dimensional space. Data streams generated by bodily motion are translated into continuous and discrete control parameters acting on playback, delay processes, filtering, dynamics, and dry/wet balance. In line with an enactive perspective, the body is not treated as a mere control interface, but as a constitutive element of both the cognitive and musical processes.</span></p>\r\n<p><span>Three interaction modes are supported: control via a graphical interface, gesture-based interaction, and collaborative interaction involving two or more users. 
The collaborative mode, in particular, supports processes of shared exploration and sonic co-creation within classroom settings.</span></p>\r\n<p><strong><em>Methodological approach</em></strong><strong><em><br /></em></strong><span>The first phase of the research was based on a case study conducted in Italian lower secondary schools, adopting a qualitative methodology that combined video-based educational research with thematic analysis (Braun &amp; Clarke, 2006). The current phase aims to further investigate the role of movement in the understanding of the foundational structures of electroacoustic musical language, with a specific focus on the educational potential of body-tracking technologies in relation to the adopted theoretical frameworks.</span></p>\r\n<p><strong><em>Current perspectives</em></strong><strong><em><br /></em></strong><span>Future developments include the integration of Concrete Motion with IRCAM technologies based on the MuBu &ndash; Multi-Buffer system, with particular attention to the development of applications such as Live Motion and Granular Motion. 
This integration will also allow MuBu to be used as a tool for collecting and analysing movement data, supporting a deeper investigation of the relationship between music and the body and improving the system&rsquo;s accessibility, responsiveness, and expressive potential within a user-centred design perspective.</span></p>\r\n<p><strong><em>Research questions</em></strong><strong><em><br /></em></strong><span>The research addresses whether electroacoustic music can be considered a foundational language for innovative approaches to music education; how these teaching and learning experiences can be effectively mediated through movement and the body using interactive music systems; and which principles may support the development of new educational models grounded in sound-based musical languages.</span></p>\r\n<p><strong><em>Objectives</em></strong><strong><em><br /></em></strong><span>The project aims to experiment with innovative methodologies and tools in music education; to promote electroacoustic music as a shared cultural and educational resource; to foster critical reflection on the relationship between technology and learning from an ecological and systemic perspective; to contribute to teacher education; and to support the design of highly accessible musical technologies, including Wearable Musical Instruments and Assistive &amp; Adaptive Music Technologies.</span></p>\r\n<p><span><img src=\"/media/uploads/call-parisenghien-lorenzo-binotti-projectpicture2.png\" alt=\"\" width=\"835\" height=\"543\" /></span></p>",
        "topics": [],
        "user": {
            "pk": 40978,
            "forum_user": {
                "id": 40924,
                "user": 40978,
                "first_name": "lorenzo",
                "last_name": "binotti",
                "avatar": "https://forum.ircam.fr/media/avatars/Lorenzo_Binotti.jpg",
                "avatar_url": "/media/cache/00/22/00226d4d86e6f91f9349d094a6626334.jpg",
                "biography": "Lorenzo Binotti is a pianist, electroacoustic musician, and researcher working in improvised music, live electronics, and sound-based practices. His work explores improvisation as real-time composition, grounded in listening, embodiment, and interaction between acoustic and digital systems.\nHe is the artistic director and conductor of the LIMS (Laboratorio di Improvvisazione e Musica Sperimentale) and the LIMS Electroacoustic Ensemble, with which he works on radical improvisation, aleatory composition, contemporary music, and electroacoustic research.\nHis artistic activity includes sound design for theatre, the quartet Right to Party, and international residencies and performances. He is also co-founder of PVAR, a duo for double bass and live electronics.\nAlongside his artistic practice, he is active in inclusive music education, developing experimental pedagogical protocols based on electroacoustic music, embodied listening, and interactive technologies, with experience in autism-related contexts. He is currently a PhD candidate in Learning Sciences and Digital Technologies with research on sound-based music, interactive musical systems, and inclusive learning environments.",
                "date_modified": "2026-01-05T09:50:47.516282+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lorenzobinotti80",
            "first_name": "lorenzo",
            "last_name": "binotti",
            "bookmarks": []
        },
        "slug": "concrete-motion-embodied-interaction-and-gesture-based-technologies-for-teaching-and-learning-electroacoustic-music",
        "pk": 4139,
        "published": true,
        "publish_date": "2026-01-05T09:53:01+01:00"
    },
    {
        "title": "Can You Really Learn Palmistry Online and Build a Career with IIVS?",
        "description": "Curious about learning palmistry from home? Discover how learning palmistry online with IIVS can help you understand life patterns, develop intuitive skills, and build a meaningful career.",
        "content": "<p>Have you ever looked at your palm and wondered what those lines actually mean? Can the lines on your hand really reveal your personality, future, or life path? These are questions that have fascinated people for centuries, leading many to explore the ancient science of palmistry. But in today&rsquo;s digital world, another question arises &mdash; can you truly <a href=\"https://iivs.com/palmistry/\"><strong>learn palmistry online?</strong></a></p>\n<p>Palmistry, also known as chiromancy, is a traditional practice that studies the lines, shapes, and mounts of the hand to interpret a person&rsquo;s life. It goes beyond simple predictions and helps in understanding behavior, emotions, and potential life events. But is it possible to learn such a detailed subject without attending physical classes?</p>\n<p>This is where platforms like the <a href=\"https://iivs.com/\"><strong>Indian Institute of Vedic Science (IIVS)</strong></a> come into the picture. With structured courses and expert guidance, IIVS makes it possible to learn palmistry online in a way that is both practical and easy to understand. But what makes their approach different from random videos or free content available online?</p>\n<p>One important question to consider is &mdash; do online courses provide enough depth? At IIVS, the focus is not just on theory. Students are guided step-by-step to understand major lines like the Heart Line, Head Line, and Life Line, along with mounts and hand shapes. More importantly, they learn how to interpret these elements together rather than in isolation.</p>\n<p>But what about beginners? Can someone with no prior knowledge start learning palmistry online? The answer is yes. IIVS designs its courses in a beginner-friendly format, ensuring that even someone new to the subject can grasp concepts easily. The teaching method is simple, structured, and supported by real-life examples.</p>\n<p>Another common doubt is &mdash; can this skill actually be useful in real life? Palmistry is not just about predictions; it is about understanding human nature. It can help in improving relationships, making better decisions, and even guiding others. Many learners use this knowledge for personal growth, while others turn it into a professional skill.</p>\n<p>You might also wonder &mdash; is it possible to build a career after learning palmistry online? With the rising interest in spiritual sciences and personal guidance, there is a growing demand for palmistry experts. IIVS provides certification that adds credibility, helping you start consultations or integrate palmistry with other practices like astrology or tarot.</p>\n<p>What about the learning experience? Will it feel disconnected without a classroom? Surprisingly, online learning offers flexibility and comfort. At IIVS, students can learn at their own pace while still receiving mentorship and support. This balance makes the process both convenient and effective.</p>\n<p>Another key question is &mdash; why choose IIVS over other platforms? The institute focuses on authentic Vedic knowledge combined with practical application. With experienced mentors and a supportive community, learners gain confidence and clarity in their skills.</p>\n<p>So, can you really learn palmistry online and make it meaningful? The answer depends on where and how you learn. With the right guidance, structured training, and dedication, it is absolutely possible.</p>\n<p>In conclusion, palmistry is not just about reading hands; it is about understanding life itself. Learning palmistry online with IIVS gives you the opportunity to explore this ancient science in a modern, accessible way. The real question is &mdash; are you ready to unlock the secrets hidden in your own hands?</p>",
        "topics": [],
        "user": {
            "pk": 166416,
            "forum_user": {
                "id": 166179,
                "user": 166416,
                "first_name": "Indian Institute",
                "last_name": "Vedic Science",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/287428e73a5a5733b9da69e2f34aee81?s=120&d=retro",
                "biography": "Indian Institute of Vedic Science (IIVS) is a trusted name in the field of Vedic education, dedicated to reviving and sharing the powerful knowledge of ancient Indian sciences. Our institute offers a wide range of professional and certified courses including Vedic Astrology & Numerology, Tarot Reading, Palmistry, Lal Kitab, and Akashic Records.\n\nAt IIVS, we believe that Vedic sciences are not just subjects but life-transforming tools that can guide individuals toward clarity, success, and spiritual growth. Our courses are designed for both beginners and advanced learners, combining traditional wisdom with modern teaching techniques.\n\nWe focus on practical learning, expert mentorship, and real-world application, enabling students to turn their knowledge into a rewarding career. With globally recognized certifications and a supportive learning environment, IIVS empowers individuals to become confident practitioners and healers.\n\nOur mission is to make authentic Vedic knowledge accessible to everyone and help people discover their true potential through ancient wisdom.",
                "date_modified": "2026-04-02T11:27:46.343906+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "iivs123",
            "first_name": "Indian Institute",
            "last_name": "Vedic Science",
            "bookmarks": []
        },
        "slug": "can-you-really-learn-palmistry-online-and-build-a-career-with-iivs",
        "pk": 4594,
        "published": false,
        "publish_date": "2026-04-05T07:50:17.422839+02:00"
    },
    {
        "title": "ignore - test guest article",
        "description": "test",
        "content": "<p>test</p>",
        "topics": [],
        "user": {
            "pk": 17749,
            "forum_user": {
                "id": 17744,
                "user": 17749,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e30825d6f03e737f342f5ad07300b065?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ircam-test-fusion",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ignore-test-guest-article",
        "pk": 443,
        "published": false,
        "publish_date": "2020-01-21T11:20:47+01:00"
    },
    {
        "title": "AI and the Brain-Computer Interface for Sound Design and Instrument Control through Emotion and Focus Recognition - Tommaso Colafiglio, Fabrizio Festa, Tommaso Di Noia",
        "description": "Our project is devoted to processing electroencephalogram signals [information retrieval] to control sound production. Our software can also control the parameters of any virtual musical instrument. We use the Muse EEG headset (a non-invasive brain-computer interface - BCI) to recognize electroencephalogram signals.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" width=\"990\" height=\"330\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Presented by: Fabrizio Festa, Tommaso Colafiglio and Tommaso Di Noia<br /><a href=\"https://forum.ircam.fr/profile/fabriziofesta/\">Biography</a></p>\r\n<p>-</p>\r\n<p>In brief, here is the structure of our AI-based software:<br />1) A deep learning model architecture can generate sound textures.<br />2) An emotion recognition system conditions the deep learning model.<br />3) A dedicated machine learning pipeline can recognize human emotions.</p>\r\n<p>Thus, we can directly control certain parameters of virtual instruments, both consciously and unconsciously. This process is possible because we have developed an advanced machine learning model to recognize the user's emotions. Through this process, we identify the human emotion in order to interact with the sound synthesis of any virtual instrument.</p>\r\n<p>-</p>\r\n<p>We will present two systems for generating sound textures and controlling sounds using two brain-computer interfaces and several machine learning and deep learning models.</p>\r\n<p>More specifically, we will focus the workshop and demonstration on illustrating a neural musical instrument and a timbre generation system conditioned by the user's emotions.</p>\r\n<p><strong>Neural musical instrument</strong>: Using the Muse EEG BCI headset, we can extract certain features of the electroencephalographic signal that allow us to detect the user's state of brain activation in real time. To do this, we trained an ML model that can classify the user's state of conscious focus. Then, by applying a specific analysis protocol to the EEG signal, we predict the continuous activation value of the user's mental state. As a result, we control three parameters of a virtual instrument for conscious modulation of the sound of the neural musical instrument.</p>\r\n<p><strong>Emotional sound texture generation</strong>: Using a dataset collected at the SisInfLab laboratory of the Polytechnic University of Bari, we trained an emotion recognition model with the Muse EEG BCI headset. This model can detect the user's predominant emotion in real time. Once we have obtained the classification value of the emotion felt by the user, we use it to condition timbre generation with deep learning models. These models were pre-trained with datasets of original samples produced by the research team.</p>\r\n<p>-</p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event&nbsp;</a>&nbsp;</strong></p>",
        "topics": [],
        "user": {
            "pk": 62605,
            "forum_user": {
                "id": 62538,
                "user": 62605,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/ff15.jpeg",
                "avatar_url": "/media/cache/b1/9e/b19ed8c272ef1c91baf4049d3748e117.jpg",
                "biography": "Fabrizio Festa is a composer, conductor, and music and sound designer. He has been a researcher in computer science applied to music for many years. His work as a composer has involved him in various fields: from classical (opera, ballet, symphonic, chamber music) to jazz and applied music, and from soundtracks for theatre, cinema, and television to radio productions. His pieces, both symphonic and chamber, have been performed in the United States, Canada, Central and South America (Mexico, Chile, Argentina, Brazil, Peru), Europe (Russia, Great Britain, Holland, Germany, France, Spain, Norway, Belgium, Greece, Denmark, Sweden, Lithuania), and Asia. He is dedicating himself to research in computer science applied to music, mainly in two areas: 1) sonic topology and computational sonology, to realise specific software for different sound mapping goals, and 2) artificial intelligence applied to assisted composition and performance. In this field, he has conducted research in deep learning and neural control devices (BCI). He is a member of AIMI (Italian Association of Musical Informatics), SIMC (Italian Society of Contemporary Music), Saggiatore Musicale, and Athena Musica.",
                "date_modified": "2026-02-25T17:14:46.663936+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fabriziofesta",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "tommaso-colafiglio-fabrizio-festa-ai-and-brain-computer-interface-for-sound-design-generation-and-musical-instrument-control-through-emotion-and-focus-recognition",
        "pk": 2770,
        "published": true,
        "publish_date": "2024-02-27T14:35:28+01:00"
    },
    {
        "title": "New Tuning Theory/Practice",
        "description": "New tuning theory/practice",
        "content": "<p><img src=\"/media/uploads/user/ba5ea693e61bc66e86db30b5643de638.png\" alt=\"\" width=\"1308\" height=\"276\" /></p>\r\n<p>The rectilinear function y=.0833333*x... quite simply a twelfth... forms an ordered series of 12 ratios, the second cycle of which is a 'doubling' cycle, i.e. 1.0833333-2, which, applied to any frequency value, will produce a series of frequencies that preserve the most cherished ratios of the harmonic series whilst being able to populate consecutive octave series without occurrence of a comma. Green line is 12&radic;2/12*x (12TET)</p>\r\n<p><img src=\"/media/uploads/user/ab51403bc1a11975f4b869e011614d01.png\" alt=\"\" width=\"901\" height=\"285\" /></p>\r\n<p>Logic is 1/12 = .083333333, which is in fact 1 of 12, so = 1; 2/12 = .166666666, which is 2 of 12, so = 2; that takes care of the cycle up to 12, or 12/12. We can now start the 2nd cycle from 12: 13 is derived from the product 12*13/12 = 13, 14 from the product of 12 and the ratio 7/6, 15 from the product of 12 and the ratio 5/4, and so on; these are the ratios from the second period of the function y=.0833333*x. At the point of 'doubling' the cycle starts again... so 24*13/12=26, 24*7/6=28 etc. The ratio sequence is independent of its birthplace and can be applied to any frequency as a starting point and cycled continuously by the same method.</p>\r\n<p>This method can also be extrapolated to operate from similarly derived functions of all similarly serialised fractions, and the second cycle of that function will always be the 'doubling' period, so it can make up intervals per period of 1/2 (3/2) onwards towards infinity. Those ratios will also co-exist infinitely within each other's sequences stepwise, i.e. 12*1.66666666 (8th ratio of the 2nd cycle of the 1/12 function) is the same as 15*1.333333333 (5th ratio of the 2nd cycle of the 1/15 function). All sorts of data points and relationships can therefore be derived from the results of these functions.</p>\r\n<p>To follow this and comment, please do so either here or at 12Fingers.org, where fresh material will be continually added.</p>\r\n<p>Hear some pretty pictures</p>\r\n<p><img src=\"/media/uploads/user/.thumbnails/1484f189da359456964389ae568dd48d.png/1484f189da359456964389ae568dd48d-1399x183.png\" alt=\"\" width=\"1399\" height=\"183\" /></p>\r\n<p>144 is the common dividend and horizontal period for all elements internalised in instances of the 1/12 system, re-occurring vertically at level 13 and then again at 26, etc. As an analogy, 0-144 are the nodes of the first instance.</p>\r\n<p><img src=\"/media/uploads/user/.thumbnails/e766d7c2aeac106e323438ee02f32669.png/e766d7c2aeac106e323438ee02f32669-1341x608.png\" alt=\"\" width=\"1341\" height=\"608\" /></p>",
        "topics": [
            {
                "id": 286,
                "name": "12tet",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 298,
                "name": "Just tuning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 284,
                "name": "Pythagorean",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 283,
                "name": "Theory",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 191,
                "name": "Tuning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17661,
            "forum_user": {
                "id": 17657,
                "user": 17661,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7356ec9886128a3b915cfe90fc832be6?s=120&d=retro",
                "biography": null,
                "date_modified": "2023-11-18T10:39:32.702791+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "flartec",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "new-tuning-theorypractice-3",
        "pk": 449,
        "published": false,
        "publish_date": "2020-01-28T10:10:25+01:00"
    },
    {
        "title": "ASAP - Keynote & Workshop - Transforming sound in a creative way by Pierre Guillot (IRCAM)",
        "description": "IRCAM Forum Workshops 2025 Hors-Les-Murs Rīga - Liepāja (Latvia)",
        "content": "<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\n<p>In this presentation,&nbsp;<strong>Pierre Guillot</strong>&nbsp;will explore the historical, artistic, and research context that shaped the development of the&nbsp;<strong>ASAP plug-ins</strong>, emphasizing the project&rsquo;s innovative solutions and technical challenges. He will introduce the functionalities of the&nbsp;<strong>ASAP collection</strong>, with a focus on plug-ins leveraging&nbsp;<strong>ARA2 technology</strong>.</p>\r\n<p>Among these, the&nbsp;<strong>Psycho Filter</strong>&nbsp;plug-in allows users to draw custom spectral filters directly on a sound&rsquo;s spectrogram, adjusting gain and fade for precise control. Its intuitive interface enables the creation of intricate spectral modifications&mdash;whether to attenuate unwanted artifacts, enhance specific frequency components, or creatively transform the sound.</p>\r\n<p>Meanwhile, the&nbsp;<strong>Pitches Brew</strong>&nbsp;plug-in offers advanced pitch and formant manipulation through interactive frequency curve editing. 
Beyond its high-quality processing, the tool provides a visual representation of fundamental frequencies, target pitches, and formants, allowing for dynamic adjustments such as redrawing, transposing, stretching, and copying curves.</p>\r\n<p>The&nbsp;<strong>Stretch Life</strong>&nbsp;plug-in introduces a unique approach to time manipulation, enabling users to stretch and compress sound dynamically for imaginative and creative sound design.</p>\r\n<p>The talk will also address the integration of&nbsp;<strong>neural network models</strong>&nbsp;in audio applications, particularly through the&nbsp;<strong>TensorFlow framework</strong>, opening a discussion on the future of AI-driven sound processing.</p>\r\n<p>This presentation will be followed by a workshop where participants will be invited to apply the tools to concrete examples and discover the possibilities offered by these technologies.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/6ac44a7bce21a001fee1902587d55e77.png\" /></p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 925,
                "name": "ASAP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18039,
            "forum_user": {
                "id": 18033,
                "user": 18039,
                "first_name": "Pierre",
                "last_name": "Guillot",
                "avatar": "https://forum.ircam.fr/media/avatars/5917_2.png",
                "avatar_url": "/media/cache/8d/bf/8dbf67f8a9bbda6883dc3ca00132cee3.jpg",
                "biography": "Pierre Guillot holds a Ph.D. in Aesthetics, Science, and Technology of the Arts, with a specialization in Music. He completed his doctoral studies at the University of Paris 8 in 2017 as part of the Laboratoire d'Excellence Arts-H2H program.\n\nThroughout his research career, Guillot has contributed to the development of innovative music technologies, including the HOA ambisonics sound spatialization library, the collaborative patching software Kiwi, and Camomile, a versatile multi-format, multi-platform plugin.\n\nSince 2018, he has been working at IRCAM as part of the Innovation and Research Means department, where he leads key projects such as Partiels, ASAP, and TS2, thereby advancing music technology and digital sound innovation.",
                "date_modified": "2026-02-17T16:42:12.990239+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 346,
                        "forum_user": 18033,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-02",
                        "type": 0,
                        "keys": [
                            {
                                "id": 10,
                                "membership": 346
                            },
                            {
                                "id": 15,
                                "membership": 346
                            },
                            {
                                "id": 20,
                                "membership": 346
                            },
                            {
                                "id": 22,
                                "membership": 346
                            },
                            {
                                "id": 31,
                                "membership": 346
                            },
                            {
                                "id": 51,
                                "membership": 346
                            },
                            {
                                "id": 102,
                                "membership": 346
                            },
                            {
                                "id": 116,
                                "membership": 346
                            },
                            {
                                "id": 121,
                                "membership": 346
                            },
                            {
                                "id": 132,
                                "membership": 346
                            },
                            {
                                "id": 140,
                                "membership": 346
                            },
                            {
                                "id": 153,
                                "membership": 346
                            },
                            {
                                "id": 203,
                                "membership": 346
                            },
                            {
                                "id": 211,
                                "membership": 346
                            },
                            {
                                "id": 236,
                                "membership": 346
                            },
                            {
                                "id": 224,
                                "membership": 346
                            },
                            {
                                "id": 278,
                                "membership": 346
                            },
                            {
                                "id": 359,
                                "membership": 346
                            },
                            {
                                "id": 386,
                                "membership": 346
                            },
                            {
                                "id": 392,
                                "membership": 346
                            },
                            {
                                "id": 598,
                                "membership": 346
                            },
                            {
                                "id": 680,
                                "membership": 346
                            },
                            {
                                "id": 705,
                                "membership": 346
                            },
                            {
                                "id": 737,
                                "membership": 346
                            },
                            {
                                "id": 750,
                                "membership": 346
                            },
                            {
                                "id": 776,
                                "membership": 346
                            },
                            {
                                "id": 798,
                                "membership": 346
                            },
                            {
                                "id": 838,
                                "membership": 346
                            },
                            {
                                "id": 860,
                                "membership": 346
                            },
                            {
                                "id": 901,
                                "membership": 346
                            },
                            {
                                "id": 922,
                                "membership": 346
                            },
                            {
                                "id": 942,
                                "membership": 346
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "guillot",
            "first_name": "Pierre",
            "last_name": "Guillot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18039,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 678,
                    "user": 18039,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "asap-keynote-workshop-transforming-sound-in-a-creative-way",
        "pk": 3575,
        "published": true,
        "publish_date": "2025-07-22T12:19:46+02:00"
    },
    {
        "title": "Re:Space - Mariam Gviniashvili",
        "description": "Re:Space is a spatial audio project developed during my residency with the Spatial Audio Network Europe. The project grows out of my desire to carry the detailed spatial work of the studio into a live setting, without losing flexibility or immediacy. Designed to adapt to different technical conditions, it has been presented at Effenaar, Hellerau, and ZIMMT.",
        "content": "<div>\n<p>&nbsp;</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/89197f1340ccfd7679cd47e30a2717ec.jpg\"></p>\n<p><strong>Re:Space</strong> is a spatial audio project that brings together fixed media and live sound performance.</p>\n<p>I began working on it during my residency with the Spatial Audio Network Europe. The first version took shape at the 4DSOUND studio in Amsterdam, where I worked with their 4DSOUND engine and system, and later at NOTAM studio 3. From the start, I wanted to compose a piece that could move from one space to another without losing its depth - something flexible enough to adapt to different speaker systems, even when setup time is short.</p>\n<p>Re:Space has been performed at Effenaar, Hellerau, and ZIMMT, each time with a different speaker configuration and acoustic environment. Across these three setups, the piece held its shape and depth, showing that the method works equally well in very different technical conditions.</p>\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/0efb1307b9e5055604d921af33d96ff5.jpg\"></p>\n<p>The project grows out of a tension in my own practice. In the studio, I shape sound and space together, slowly, letting them influence each other. It&rsquo;s a detailed process that&rsquo;s hard to recreate on stage. In my live sets, on the other hand, I&rsquo;ve often worked with broader gestures and semi-automated systems, focusing on control and flow. With Re:Space, I&rsquo;m trying to bring these two worlds closer. I&rsquo;ve been working with tools like 4DSOUND, the IEM plug-in suite, Sparta and Grapes to carry the fine detail of my studio pieces into a live setting, without losing the sense of risk and immediacy that performance brings.<br>This project is the first step in that direction. It opens a new line of work for me, where composed sound and live space meet more closely than before.</p>\n</div>\n<div>&nbsp;</div>",
        "topics": [
            {
                "id": 4268,
                "name": "live performance",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 13598,
            "forum_user": {
                "id": 13595,
                "user": 13598,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Mariam_G.jpg",
                "avatar_url": "/media/cache/2e/ca/2ecab49135bf85882cec09a714c8a235.jpg",
                "biography": "Mariam Gviniashvili is a composer and sound artist working at the intersection of electroacoustic music, 3D sound, and multimedia performance. Her work explores the physical and emotional dimensions of sound and space, often integrating visuals and live performance.\n\nHer work has been presented at festivals, venues, and radio broadcasts worldwide, including INA GRM, ZKM | Center for Art and Media, Ars Electronica, EMPAC, New York Electroacoustic Music Festival, BEAST FEaST, Virginia Tech, Transitions at CCRMA, MA/IN, ICMC, Mixtur Festival, Klingt Gut, In Situ Festival, Heroines of Sound, Ultima Festival, and BBC Radio.\n\nMariam is the recipient of two Honorary Mentions from Prix Ars Electronica (2021, 2023), the PRIX CIME, and the Work of the Year Award (NKF)",
                "date_modified": "2026-03-02T19:05:17.792678+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "MariamG",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 4385,
                    "user": 13598,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "respace-mariam-gviniashvili",
        "pk": 4385,
        "published": true,
        "publish_date": "2026-02-18T16:30:30+01:00"
    },
    {
        "title": "Shadows (2015 Ircam Artistic Research Residency)",
        "description": "Shadows is a solo piano piece that incorporates dynamically-generated music notation, display, and score following. It was created in close collaboration with researchers at IRCAM and GRAME as part of a 2015 IRCAM artistic research residency.",
        "content": "<p>In Shadows, the pianist reads an open-form score from a laptop screen, choosing his own path through a series of connected musical fragments. At the same time, the laptop listens to the pianist, tracks the decisions he makes about what to play, and constantly updates the score in response. This dialogue between pianist and computer, actuated through a dynamic score, serves to amplify the expressive decisions made by the pianist, to subtly push him in new musical directions, and to create large-scale structural arcs in the music.</p>\r\n<p>Shadows consists of four movements, each of which explores the pianist-computer-score interaction from a different perspective:</p>\r\n<p>I. Traces. The score consists of twelve chords followed by their echoes. The speed at which the pianist moves from chord to chord affects how much of the score is displayed and how much is hidden.</p>\r\n<p>II. Chorale. The pianist plays from a selection of five chords and three embellishment notes. Each time a chord or note is played, its harmonic density and complexity is changed.</p>\r\n<p>III. Perpetual Quiet. The pianist builds arpeggios from a constantly changing set of pitches.</p>\r\n<p>IV. Perpetual Melody. The pianist chooses from a combination of rhythmically driven, short melodic motives and chords. Connections between fragments are added and removed based on the amount each fragment is being played.</p>\r\n<p>I wrote Shadows for pianist Melvin Chen, during an artistic research residency at IRCAM. I worked closely with researchers on the Music Representations Team to extend the functionality of Antescofo to better support dynamically generated scores and open-form scores. 
I also worked with researchers at GRAME to utilize INScore for the dynamic display of notation to the pianist, working to add features and address limitations of that platform as needed for this performance context.</p>\r\n<p>More information about the project, including a score and a video of a performance, is available at <a href=\"http://distributedmusic.gatech.edu/jason/music/shadows-2015/\">http://distributedmusic.gatech.edu/jason/music/shadows-2015/</a>.</p>\r\n<p>Many thanks to Arshia Cont and Jean-Louis Giavitto from IRCAM and to Dominique Fober from GRAME for collaborating with me to extend their Antescofo and INScore software, respectively, for use in this piece.</p>",
        "topics": [],
        "user": {
            "pk": 3452,
            "forum_user": {
                "id": 3450,
                "user": 3452,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/73b83249dc8d848a984dd2286e191edc?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "xfreeman",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "shadows-2015-ircam-artistic-research-residency",
        "pk": 382,
        "published": true,
        "publish_date": "2019-12-19T16:25:41+01:00"
    },
    {
        "title": "Ashes to Ashes: Decomposition as a composition method by Louisa Palmi.",
        "description": "IRCAM forum - Latvia september 2025 - Louisa Palmi",
        "content": "<div>\r\n<div>\r\n<div>\r\n<p>This project explores sonification as a compositional tool through two contrasting versions of the piece Ashes to Ashes and the piece Dust to Dust, centered on the theme of death and bodily decomposition. The first version employs an abstract, interpretative approach to sonification, translating schematic information about decomposition into artistic choices. The second version uses algorithmic composition driven by datasets, where data modulates both the sound material and its spatialization. The comparison reveals key differences in compositional approaches. The project also highlights that strict adherence to data does not necessarily produce results that are immediately perceptible or intuitive to the listener. The third piece is the result of implementing the knowldege aquired from the first two experiments in a less strict way.</p>\r\n<p></p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [],
        "user": {
            "pk": 61344,
            "forum_user": {
                "id": 61278,
                "user": 61344,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/67bdfc5a16759784de62650b96366178?s=120&d=retro",
                "biography": "I am a composer and artist specializing in electroacoustic music and immersive multichannel sound, with a strong interest in interdisciplinary collaboration. I hold a bachelor’s degree in music from the Academy of Music and Drama in Gothenburg, where I studied under Natasha Barrett, and a master’s degree in electroacoustic composition from the Royal College of Music in Stockholm. My background also includes studies in mathematics and medicine, which I integrate into my artistic practice.\n\nAs a composer and musician, I work with a variety of multichannel formats and across different types of collaborations. My work has been presented internationally, including at IRCAM and the MA/IN Festival. I believe that by combining artistic methods with scientific research and other cross-disciplinary practices, we can uncover insights or experiences that neither field could achieve alone.\nThis approach shaped my master’s thesis, Ashes to Ashes, which explored the decomposition of the human body through a fusion of scientific research and artistic interpretation. Using sonification techniques, I translated data on bodily decay into an immersive sound composition. For me, there is something powe",
                "date_modified": "2025-09-17T09:36:17.569818+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "louisapalmi1",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ashes-to-ashes-decomposition-as-a-composition-method",
        "pk": 3610,
        "published": true,
        "publish_date": "2025-08-11T09:50:50+02:00"
    },
    {
        "title": "Acoustic Objects: Creating and Distributing Personal Immersive Audio Experiences by Jean-Marc Jot",
        "description": "",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<p><span style=\"font-weight: 400;\"></span></p>\r\n<p><span style=\"font-weight: 400;\"></span></p>\r\n<p><span style=\"font-weight: 400;\"></span></p>\r\n<p><span style=\"font-weight: 400;\"></span></p>\r\n<p><span style=\"font-weight: 400;\"></span></p>\r\n<p><img src=\"/media/uploads/unnamed.png\" alt=\"\" max-width=\"1024\" max-height=\"1024\" /><br /><span style=\"font-weight: 400;\"></span></p>\r\n<p>Presented by Jean-Marc Jot</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/jmjot/\" target=\"_blank\">Biography</a></p>\r\n<p><span style=\"font-weight: 400;\"></span></p>\r\n<p><span style=\"font-weight: 400;\">What if we could stream immersive virtual events in which audio objects coincide spatially with displayed visuals, or music and soundtracks amenable to artifact-free instrument or language substitution, or to spectator 6-DoF navigation?&nbsp; We consider the evolution of object-based immersive audio technology toward the unification of cinematic or broadcast content and embodied experience ecosystems.&nbsp; We introduce the notion of </span><i><span style=\"font-weight: 400;\">Acoustic Objects</span></i><span style=\"font-weight: 400;\">, providing a universal spatial audio encoding and transmission format extension for the creation and distribution of personalizable and navigable music, multimedia, and virtual or augmented reality sound, in entertainment and business applications.</span></p>\r\n<p><span style=\"font-weight: 400;\"></span></p>\r\n<p><span style=\"font-weight: 400;\"></span></p>\r\n<p><span 
style=\"font-weight: 400;\"></span></p>\r\n<p><span style=\"font-weight: 400;\"></span></p>\r\n<p><span style=\"font-weight: 400;\"></span></p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20758,
            "forum_user": {
                "id": 20749,
                "user": 20758,
                "first_name": "Jean-Marc",
                "last_name": "Jot",
                "avatar": "https://forum.ircam.fr/media/avatars/jmj_2023b_whitebg.png",
                "avatar_url": "/media/cache/43/5c/435c8591db0f56f21cc34332821b283a.jpg",
                "biography": "Globally recognized audio technology innovator in consumer electronics and pro markets, currently focusing more particularly on immersive audio, hearing personalization and music technology innovation.  I founded Virtuel Works to help accelerate the development and deployment of audio, voice and music computing technologies that will power immersive experiences.  Previously, I initiated and drove the development of novel sound processing technologies, platforms and standards for virtual and augmented reality, gaming, broadcast, cinema, and music creation - with Magic Leap, Creative Labs, DTS / Xperi, and iZotope / Native Instruments.  Before relocating to California in the late 90s, I conducted research at IRCAM in Paris, where I created the Spat software library for immersive music creation and performance.  Fellow of the Audio Engineering Society, regular speaker in industry and academic events.  Authored numerous publications and patents on digital audio signal processing.",
                "date_modified": "2025-04-16T18:29:13.648099+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jmjot",
            "first_name": "Jean-Marc",
            "last_name": "Jot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3392,
                    "user": 20758,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "acoustic-objects-creating-and-distributing-personal-immersive-audio-experiences",
        "pk": 3305,
        "published": true,
        "publish_date": "2025-02-24T13:29:22+01:00"
    },
    {
        "title": "Brain Computer Interface and Sound Multi-Layer Perceptron BCI – SMLP by Tommaso Di Noia, Tommaso Colafiglio, Fabrizio Festa",
        "description": "The present work introduces SMLP – Sound Multi-Layer Perceptron, a system that employs an artificial neural network as both a musical instrument and a cognitive–computational exploration environment. The architecture consists of: (i) a Problem Generator that constructs synthetic, parameterizable, and controllable datasets; and (ii) a Learning Engine developed from scratch (MLP) with variable depth and width, operating under a real-time supervised learning paradigm. The system integrates a 14-channel Emotiv Brain–Computer Interface (BCI) to acquire the user’s raw EEG signal.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\n&lt;table&gt;\n&lt;tbody&gt;\n&lt;tr&gt;\n&lt;td&gt;<img alt=\"\" src=\"/media/uploads/di_noia,_colafiglio,_festa.jpg\">&lt;/td&gt;\n&lt;/tr&gt;\n&lt;/tbody&gt;\n&lt;/table&gt;\n<p>The present work introduces <strong>SMLP &ndash; Sound Multi-Layer Perceptron</strong>, a system that employs an artificial neural network as both a musical instrument and a cognitive&ndash;computational exploration environment. The architecture consists of: (i) a <em>Problem Generator</em> that constructs synthetic, parameterisable, and controllable datasets; and (ii) a <em>Learning Engine</em> developed from scratch (MLP) with variable depth and width, operating under a real-time supervised learning paradigm. The system integrates a 14-channel Emotiv Brain&ndash;Computer Interface (BCI) to acquire the user's raw EEG signal. The signals are preprocessed and analysed using our pretrained machine learning models, which can estimate, in real time, several mental states: focus, stress, engagement, arousal, valence, and frontal asymmetry. These neurophysiological indices are not used solely as visualisation or external control parameters; rather, they directly influence the deep neural network's learning dynamics. Specifically, mental states modulate the backpropagation process as an adaptive optimiser interacting with weight updates. EEG-derived metrics directly influence regularisation coefficients, thereby configuring a human-in-the-loop learning paradigm in which the user's cognitive condition becomes an integral component of the optimisation function. Training thus assumes a neuroadaptive dimension, whereby the network learns as a function of the mental state detected in real time. 
A multimodal GUI renders the model's internal state both visually and audibly during training. Weights and biases are mapped onto the parameters of a multi-oscillator additive sound synthesis system, transforming optimisation dynamics into acoustic material. The system further incorporates a real-time feedback loop among the interface, the neural network, and the user's neurophysiological state, thereby generating a closed ecosystem in which machine-learning processes and brain activity co-evolve. SMLP therefore proposes an operational framework that integrates technical analysis, auditory perception, and neurocognitive regulation. It offers a tool for research through auditory and physiological monitoring of the training steps as well as for artistic practice, introducing a form of adaptive neural composition guided by the performer's mental state. The system architecture illustrates the platform's overall framework, delineating its constituent components and their interactions across distinct processing stages.&nbsp;</p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4284,
                "name": "ANN",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1846,
                "name": "BCI",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4285,
                "name": "Computational Sonology",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3463,
                "name": "EEG",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 313,
                "name": "Machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2578,
                "name": "Musical Composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 62605,
            "forum_user": {
                "id": 62538,
                "user": 62605,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/ff15.jpeg",
                "avatar_url": "/media/cache/b1/9e/b19ed8c272ef1c91baf4049d3748e117.jpg",
                "biography": "Fabrizio Festa is a composer, conductor and Music and Sound Designer. He has been a researcher in applied to music computer science for many years. His work as a composer has involved him in various fields: classical (opera, ballet, symphonic, chamber music) to Jazz and applied music, from soundtracks for theatre, cinema and television to radio productions. Its pages, both symphonic and chamber music, have been performed in the United States, Canada, Central and South America (Mexico, Chile, Argentina, Brazil, Peru), in Europe (Russia, Great Britain, Holland, Germany, France, Spain, Norway, Belgium, Greece, Denmark, Sweden, Lithuania) and AsiHe is dedicating himself to research in computer science applied to music. Two sectors are mainly devoted to 1) sonic topology and computational sonology to realise specific software for different sound mapping goals and 2) artificial intelligence applied to assisted composition and performance. In this field, he has conducted research in deep learning and neural control devices (BCI).He is a member of AIMI (Italian Association of Musical Informatics), SIMC (Italian Society of Contemporary Music), Saggiatore Musicale, and Athena Musica.",
                "date_modified": "2026-02-25T17:14:46.663936+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fabriziofesta",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "tommaso-di-noia-tommaso-colafiglio-fabrizio-festa-brain-computer-interface-and-sound-multi-layer-perceptron-bci-smlp",
        "pk": 4398,
        "published": true,
        "publish_date": "2026-02-19T12:51:02+01:00"
    },
    {
        "title": "\"Low-Latency Inference of Optimized AI-DSP Models for Hard Realtime Deadlines\" by Christopher Johann Clarke (Singapore)",
        "description": "Audio processing is a hard realtime process, whether or not we'd like to admit it. In times where a lot of focused is being placed on achieving high computing power with greater reliance on GPU processing, it is easy to forget that realtime audio faces different challenges. I present some strategies for mitigating the need for more compute, and how to achieve realtime safe inference.",
        "content": "<p></p>\r\n<p><span>Digital audio processing is a hard real-time task. Each processing cycle must finish within a strict deadline set by the buffer size and sampling rate. If the deadline is missed, the result is an audible discontinuity. Unlike general-purpose computing, there is no allowance for variable execution time or occasional spikes in latency. </span><span>Most current machine learning systems are designed for high throughput. They often rely on GPUs and parallel scheduling. These methods are effective for batch processing but do not address the timing requirements of real-time audio. In real-time contexts the worst case matters more than the average case. Operations that are acceptable in offline inference, such as asynchronous scheduling or dynamic memory allocation, cannot be tolerated inside a real-time audio process. </span><span>This article describes strategies for running AI&ndash;DSP models under these conditions. It is divided into three parts: adjusting expectations to match fixed real-time limits, optimizing neural network structures to reduce computation and memory use, and implementing code that ensures bounded execution to support inference under the sub-millisecond deadlines common in current audio systems. 
<strong>I provide generic constructions of the arguments that I will be focusing on in the talk in this article.</strong><br /><br />Here is a plot of a single-input inference, comparing the times for different models across different tasks:</span></p>\r\n<div><span><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2376e28ddde7c727db865a00d3356998.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></div>\r\n<div style=\"text-align: center;\"><span>Generic Diffusion https://github.com/apapiu/transformer_latent_diffusion &nbsp;<br />VGG16 https://keras.io/api/applications &nbsp;<br />Audio @ 48 kHz (1/48000 * 1000 = 0.0208 ms)<br /></span></div>\r\n<h2 style=\"text-align: left;\"><span style=\"text-decoration: underline;\"><strong>Optimizing our Expectations</strong></span></h2>\r\n<p><span>As alluded to earlier, real-time audio imposes fixed deadlines. Each processing block must be completed before the next buffer is required. This defines the maximum allowable computation per cycle. Any process that exceeds this bound produces failure in the audio stream. </span><span>Neural models must be sized according to these limits. Large networks that operate in other domains cannot be used directly. Practical deployment requires reducing parameters, or some other method for reducing the amount of computation to be done. This has been shown by certain grey-box models to be effective. </span><span>Research will continue to improve efficiency and enable larger models under the same deadlines, but when designing a system today the specification must reflect current hardware and software conditions. Planning should account for incremental improvements in the near term, but assuming hardware or framework advances years into the future is not viable for building reliable systems. </span><span>The expectation is bounded execution time with minimal variance, defined by what is feasible at present and in the immediate future. 
All further stages of optimization assume this constraint as the baseline condition. To further use a line from Bencina (from &ldquo;time waits for nothing&rdquo;), we should consider an algorithm&rsquo;s worst-case compute time instead of considering its averaged or amortised compute time.<br /><br /></span></p>\r\n<h2><span style=\"text-decoration: underline;\">Optimizing the Neural Network Architecture</span></h2>\r\n<p>Once the external constraints are established, the network itself must be specified with respect to those constraints. Network design is often approached heuristically, through trial, error, or large-scale search. While this may eventually yield a working solution, it consumes resources and does not ensure suitability for real-time use. If we draw inspiration from Algorithmic Alignment, we can see that there should always exist some network for the function we want to achieve:</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/98f77fd6f1c8d49c3e921a58ebfee0b4.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" />As an example, we could run a grid search across many layers to find the ideal model size, but this takes time...</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/7c5dac9a45a9435281e94bfdea670cb8.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>A more practical strategy is to define limits on model size in advance. This can be done by drawing on prior results, experimental evidence, or theoretical bounds, rather than relying solely on intuition. In practice, this may mean restricting the range of architectures to be tested, rather than attempting unconstrained grid searches across depth and width. 
For example, simple experiments (<a href=\"https://www.youtube.com/watch?v=lZxfv0euB98\">presented here at ADCx</a>) demonstrate that searching across thousands of possible configurations can take days, while a bounded search guided by prior knowledge converges within the same order of accuracy in far less time. The aim is not to discover an optimum across the full space, but to establish a workable boundary within which models can be evaluated efficiently. <strong>I will present more of these optimizations in the talk.</strong> Treating the network as a component subject to constraints makes the design process more predictable. The focus is no longer on achieving maximum performance without regard to cost, but on achieving sufficient performance while remaining within time and resource budgets. This approach reduces wasted computation, simplifies evaluation, and increases the likelihood that models trained in development can be transferred directly to deployment under real-time deadlines.<br /><br /></p>\r\n<h2><span style=\"text-decoration: underline;\">Optimizing the Code</span></h2>\r\n<p>General inference libraries are designed to maximize throughput or exploit large batch sizes. These choices are appropriate for offline or high-volume workloads but do not match the requirements of hard real-time audio, where latency per sample or per buffer is the primary constraint. <strong>RTNeural </strong>(<a href=\"https://github.com/jatinchowdhury18/RTNeural\">github</a>) was developed specifically with this use case in mind. It is a lightweight C++ inference library intended for audio plugins and other systems with strict deadlines. 
Unlike larger frameworks, it does not assume batching, memory allocation during execution, or hidden scheduling.</p>\r\n<p>Models can be built at compile time, embedding the architecture in the type system, or at run time using pre-exported weights.</p>\r\n<blockquote><code>// example of model defined at run-time</code><br /><code>std::unique_ptr&lt;RTNeural::Model&lt;float&gt;&gt; neuralNet[2];</code><br /><br /><code>// example of model defined at compile-time</code><br /><code>RTNeural::ModelT&lt;float, 1, 1,</code><br /><code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;RTNeural::DenseT&lt;float, 1, 8&gt;,</code><br /><code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;RTNeural::TanhActivationT&lt;float, 8&gt;,</code><br /><code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;RTNeural::Conv1DT&lt;float, 8, 4, 3, 2&gt;,</code><br /><code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;RTNeural::TanhActivationT&lt;float, 4&gt;,</code><br /><code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;RTNeural::GRULayerT&lt;float, 4, 8&gt;,</code><br /><code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;RTNeural::DenseT&lt;float, 8, 1&gt;</code><br /><code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&gt; neuralNetT[2];</code></blockquote>\r\n<p>Both methods expose a simple per-sample forward() call that can be used directly inside the audio callback without additional overhead.</p>\r\n<blockquote><code>// in the processBlock()</code><br /><code>for (int ch = 0; ch &lt; buffer.getNumChannels(); ++ch) {</code><br /><code>&nbsp; auto* x = buffer.getWritePointer (ch);</code><br /><code>&nbsp; for (int n = 0; n &lt; buffer.getNumSamples(); ++n) {</code><br /><code>&nbsp; &nbsp; &nbsp; float input[] = { x[n] };</code><br /><code>&nbsp; &nbsp; &nbsp; x[n] = neuralNetT[ch].forward (input);</code><br /><code>&nbsp; &nbsp;}</code><br /><code>}</code></blockquote>\r\n<p>In 
practice, this means that initialization, weight loading, and memory allocation are done once, outside the callback. Execution inside the callback is reduced to deterministic state updates, with no blocking operations. Benchmarks included with the project show that RTNeural maintains real-time feasibility where general-purpose runtimes do not. For audio, this property is more important than absolute throughput, making RTNeural suitable as a code-level optimization for deploying learned models under hard deadlines.</p>\r\n<h2><span style=\"text-decoration: underline;\">Conclusion</span></h2>\r\n<p><span>Hard real-time audio requires bounded execution in every cycle. This work has outlined constraints on expectations, network design, and code implementation, showing how models can be made feasible under strict deadlines. The priority is not throughput but determinism: in real-time audio, missing the deadline is not a slowdown, it is failure.</span></p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 3504,
                "name": "audio dsp",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3505,
                "name": "low latency",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1743,
                "name": "neural network",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1855,
                "name": "realtime",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 11364,
            "forum_user": {
                "id": 11361,
                "user": 11364,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/PHOTO-2025-06-24-04-02-06.jpg",
                "avatar_url": "/media/cache/ec/91/ec916abf18d5af9c266db74036b69170.jpg",
                "biography": "Christopher Johann Clarke, PhD in Artificial Intelligence and Machine learning with a specialisation in audio design and digital signal processing (DSP). My passion lies with low-latency audio plugin/framework implementations, particularly for applications that have traditionally been deemed otherwise. My PhD thesis focuses on the utilisation of AI/ML technologies to run extremely low-latency audio processing, even on low-compute devices such as embedded microcontrollers or System-on-a-Chip. As a Music Technologist with focus on generative algorithms and stochastic modelling for music generation, I have presented fixed site-specific installations and deployed software libraries on music generation.",
                "date_modified": "2025-10-15T04:08:00.035364+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "chrisclarke",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "low-latency-inference-of-optimized-ai-dsp-models-for-hard-realtime-deadlines-by-christopher-johann-clarke-singapore",
        "pk": 3774,
        "published": true,
        "publish_date": "2025-10-06T12:09:19+02:00"
    },
    {
        "title": "Truthscape - Jieyu Huang, Jinhan Lu, Yiding Ma, Shuyi Guo, Bei Su",
        "description": "Limehouse, qui était autrefois une plaque tournante du commerce, a ensuite fusionné avec le quartier chinois de SOHO. Ce projet se penche sur son histoire, soulignant comment l'information façonne la perception et remet en question les vérités figées.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p>Pr&eacute;sent&eacute; par :&nbsp;<span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Jieyu Huang, Jinhan Lu, Yiding Ma, Shuyi Guo, Bei Su<br /><a href=\"https://forum.ircam.fr/profile/jieyu47/\">Biographie Jieyu Huang, Yiding Ma<br /></a><a href=\"https://forum.ircam.fr/profile/shuyi/\">Biographie Shuyi Guo</a></span></p>\r\n<p><a href=\"https://forum.ircam.fr/profile/jieyu47/\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\"></span></a></p>\r\n<p>Le quartier de Limehouse, qui faisait autrefois partie des docks historiques de Londres, &eacute;tait une route commerciale cruciale reliant le Royaume-Uni et l'Extr&ecirc;me-Orient avant que SOHO ne devienne le c&eacute;l&egrave;bre quartier chinois. Les premiers colons, notamment des marins du Guangdong et des immigrants chinois de Shanghai, ont jet&eacute; les bases de ce qui allait devenir le \"Chinatown\" de Londres. Le quartier chinois de Limehouse a une histoire riche et complexe. Ce projet d&eacute;crit et comprend ce lieu &agrave; travers divers documents historiques et reportages, en essayant d'explorer le r&ocirc;le de l'information dans la formation de l'impression du quartier.</p>\r\n<p>Nous nous concentrons principalement sur l'inconnaissabilit&eacute; de la v&eacute;rit&eacute; qui se cache derri&egrave;re le quartier de Limehouse. La v&eacute;rit&eacute; n'est pas une entit&eacute; absolue et fixe, mais un poly&egrave;dre qui refl&egrave;te les perspectives, les pr&eacute;jug&eacute;s et les exp&eacute;riences de ceux qui y participent. 
Le projet esp&egrave;re montrer que la v&eacute;rit&eacute;, comme l'histoire, n'est pas un r&eacute;cit unique, mais une entit&eacute; dynamique et &eacute;volutive, fa&ccedil;onn&eacute;e par l'interaction des perspectives culturelles, sociales et personnelles.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 55024,
            "forum_user": {
                "id": 54962,
                "user": 55024,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/9cb891d757bbbc93d821a869b373975f?s=120&d=retro",
                "biography": "Jieyu Huang, studying in digital direction at the Royal College of Art and is passionate about using cutting-edge technology and innovative digital media as a means of artistic communication. She specialises in spatial design, interactive installations and other art and design related fields. The themes of her works often focus on feminism, delving into complex issues such as gender, identity and empowerment.\n\nYiding Ma, a Royal College of Art student with a background in architecture, is passionate about weaving sensations and emotions into materials through compelling storytelling. She aims to become a space director, crafting immersive, emotionally resonant experiences that transcend conventional boundaries and transport audiences into engaging narratives enriched with sensory elements.\n\nShuyi Guo is a digital direction student at the Royal College of Art with a background in art and technology. As a visual communicator, she focuses on the relationship between social group behaviour, psychology, alienation phenomena, and their driving factors. Her practice utilizes narrative imagery, installations, and performance art to convey messages to the audience, creating dramatic works o",
                "date_modified": "2024-03-17T23:21:40.376451+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jieyu47",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "truthscape",
        "pk": 2799,
        "published": true,
        "publish_date": "2024-03-05T00:07:39+01:00"
    },
    {
        "title": "Emotion-Driven audio and multimédia generation using brain-computer interfaces and deep learning - Fabrizio Festa, Tommaso Colafiglio, Tommaso Di Noia",
        "description": "This research explores a system that integrates Brain-Computer Interfaces (BCIs) with our proprietary advanced machine learning and deep learning models. The system generates real-time images and audio textures based on the emotional and cognitive states of two users within a biofeedback protocol. By employing AI models trained to classify emotional polarity and mental states such as Focus, Relaxation, Stress, and Workload, this system provides a comprehensive understanding of user cognition and emotion.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p style=\"text-align: center;\"><img src=\"https://forum.ircam.fr/media/uploads/fabrizio_festa.jpeg\" alt=\"\" width=\"457\" height=\"343\" /><span>&nbsp;</span><img src=\"https://forum.ircam.fr/media/uploads/tomm-colafiglio-1-360x270.jpg\" alt=\"\" width=\"459\" height=\"344\" /><span>&nbsp;</span><img src=\"https://forum.ircam.fr/media/uploads/t_dinoia-506699224-315x270.png\" alt=\"\" width=\"403\" height=\"345\" /></p>\r\n<p style=\"text-align: left;\"></p>\r\n<p style=\"text-align: left;\"><span>Presented by :</span><span>&nbsp; PHD Candidate&nbsp;</span><span>Dr Tommaso Colafiglio,</span><span>&nbsp;</span><span>Professor Tommaso Di Noia,</span><span>&nbsp;</span><span>Professor Fabrizio Festa</span></p>\r\n<p style=\"text-align: left;\"><a href=\"https://forum.ircam.fr/profile/fabriziofesta/\" target=\"_blank\">Biography Fabrizio Festa</a></p>\r\n<p style=\"text-align: left;\"><a href=\"https://forum.ircam.fr/profile/sand/\" target=\"_blank\">Biography Tommaso Colafiglio</a></p>\r\n<p style=\"text-align: left;\"><a href=\"https://forum.ircam.fr/profile/tommasodinoia/\" target=\"_blank\">Biography Tommaso Di Noia</a></p>\r\n<p><strong></strong></p>\r\n<p><strong>Abstract</strong></p>\r\n<p>This research explores a system that integrates Brain-Computer Interfaces (BCIs) with our proprietary advanced machine learning and deep learning models. The system generates real-time images and audio textures based on the emotional and cognitive states of two users within a biofeedback protocol. 
By employing AI models trained to classify emotional polarity and mental states such as Focus, Relaxation, Stress, and Workload, this system provides a comprehensive understanding of user cognition and emotion.</p>\r\n<p><strong>Applications</strong></p>\r\n<p><strong>1) Sound and Musical Composition</strong></p>\r\n<p>The system introduces a method for sound and musical composition. It analyzes brain signals in real time to produce musical textures that align with the user&rsquo;s mental state. During live performances, the interaction between musicians can be monitored to create dynamic auditory or visual feedback. This feedback fosters an engaging dialogue between human creativity and AI-driven responses, enriching the artistic process.</p>\r\n<p><strong>2) Visual Arts</strong></p>\r\n<p>The methodology also extends to visual arts, enabling the generation of dynamic images or videos that synchronise with specific emotional states. Such capabilities pave the way for interactive installations that evolve based on audience engagement, providing new avenues for artistic experimentation and creative expression.</p>\r\n<p><strong>System Workflow</strong></p>\r\n<ul>\r\n<li><strong>Emotion and Mental State Recognition<span>&nbsp;</span></strong>- BCIs capture real-time brain signals from two users. 
Proprietary AI algorithms analyse these signals to identify emotional and cognitive states.</li>\r\n<li><strong>Customised Image Generation<span>&nbsp;</span></strong>- The system encodes detected emotions into personalised visual outputs that reflect the user&rsquo;s emotional state.</li>\r\n<li><strong>Multimedia Emotional Synchronization</strong><span>&nbsp;</span>- Visual and auditory content dynamically adapts to the user&rsquo;s emotional state, providing an immersive multisensory experience.</li>\r\n<li><strong>Dynamic Audio Textures<span>&nbsp;</span></strong>- Real-time audio signals are generated and modulated according to the user&rsquo;s emotions and mental states, enhancing the overall sensory impact.</li>\r\n<li><strong>Interactive Scriptwriting</strong><span>&nbsp;</span>- Collaborative narratives are shaped by users&rsquo; emotional data, allowing them to actively influence story development in real time.</li>\r\n</ul>",
        "topics": [
            {
                "id": 753,
                "name": "Artificial intelligence,",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1846,
                "name": "BCI",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 670,
                "name": "Deep learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2579,
                "name": "Emotion Recognition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 313,
                "name": "Machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1125,
                "name": "multimedia",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2578,
                "name": "Musical Composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2580,
                "name": "RealTime Generation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 62605,
            "forum_user": {
                "id": 62538,
                "user": 62605,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/ff15.jpeg",
                "avatar_url": "/media/cache/b1/9e/b19ed8c272ef1c91baf4049d3748e117.jpg",
                "biography": "Fabrizio Festa is a composer, conductor and Music and Sound Designer. He has been a researcher in applied to music computer science for many years. His work as a composer has involved him in various fields: classical (opera, ballet, symphonic, chamber music) to Jazz and applied music, from soundtracks for theatre, cinema and television to radio productions. Its pages, both symphonic and chamber music, have been performed in the United States, Canada, Central and South America (Mexico, Chile, Argentina, Brazil, Peru), in Europe (Russia, Great Britain, Holland, Germany, France, Spain, Norway, Belgium, Greece, Denmark, Sweden, Lithuania) and AsiHe is dedicating himself to research in computer science applied to music. Two sectors are mainly devoted to 1) sonic topology and computational sonology to realise specific software for different sound mapping goals and 2) artificial intelligence applied to assisted composition and performance. In this field, he has conducted research in deep learning and neural control devices (BCI).He is a member of AIMI (Italian Association of Musical Informatics), SIMC (Italian Society of Contemporary Music), Saggiatore Musicale, and Athena Musica.",
                "date_modified": "2026-02-25T17:14:46.663936+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fabriziofesta",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "emotion-driven-audio-and-multimedia-generation-using-brain-computer-interfaces-and-deep-learning",
        "pk": 3241,
        "published": true,
        "publish_date": "2025-03-14T11:37:38+01:00"
    },
    {
        "title": "Between ambisonic field recordings, speaker array development and concert organizing by Hans-Gunter Lock",
        "description": "Creating an ambisonic studio and spatial sound concerts in Estonia. Experiences with ambisonic field recording practice.",
        "content": "<p><strong><strong>➡️ This presentation is part of&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></p>\r\n<p>Since 2021 the author has dedicated his creative and organizational activities to popularize spatial sound in Estonia, mainly using ambisonic technology. At the beginning was the development of a speaker array system with 23 budget semi-professional loudspeakers, which can be deployed from the studio and moved and installed in a black box type venue with versatile hanging options. The author is interested in a variety of creative ways to use the ambisonic technology.</p>\r\n<p>The author has explored the creative potential of this technology in several ways. The obvious choice was the creation of acousmatic electroacoustic fixed-media pieces based on synthesized as well as recorded and electronically processed sounds.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/318a7ce938369829c25f63f4380d107b.jpg\" /></p>\r\n<p>&nbsp;</p>\r\n<p>But the author was particularly interested in the topic of spatial field recording with sufficient resolution in space. The Zylia ZM-1 3<sup>rd</sup> order ambisonic microphone has proven itself very well for this purpose, which allows it to capture soundscape situations preserving all the directional information from the recording location.</p>\r\n<p>In urban situations the careful choice of the recording spots has been essential finding soundscape situations with sound sources from as many different directions as possible. There have been meaningful and iconic soundscape spots like a pedestrian wooden bridge beside a railway bridge in the city of Helsinki, or a crossroad passing by several types of trams in Tallinn. 
A different-sounding environment has been found in spring, with bird songs from a small island in the Gulf of Finland or from a manor park in Central Estonia.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c2ffc5fa64347730f8572fa4b351560e.jpg\" /></p>\r\n<p>&nbsp;</p>\r\n<p>Quite different sonic impressions can be found in the recordings from Benin in West Africa: the early morning with wild and domestic birds and a thunderstorm, both in the town of Grand-Popo, as well as numerous ambisonic recordings of the rhythmically complex traditional music from several villages in the same geographic region. The ambisonic Zylia recordings allowed some post-processing sound source separation, which was helpful for making transcriptions of this aural cultural heritage.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/10833e35c172f636333456e514579776.jpeg\" /></p>\r\n<p>To date, numerous concerts with the author's speaker system have been organized. As the author teaches at the Estonian Academy of Music and Theatre, workshops and concerts for the students have been regularly organized. Through the topic of spatial sound, the relatively small Estonian community of academic electroacoustic music has been connected to the sound-art and underground scenes. The ambisonic sound system has regularly been part of the biennial &Uuml;le Heli festival, which has now existed for 10 years.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1cadae553bd87f955df792b0acad29e1.jpg\" /></p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 18069,
            "forum_user": {
                "id": 18063,
                "user": 18069,
                "first_name": "Hans-Gunter",
                "last_name": "Lock",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6361c326c827f0906ce4f133e27bcfc6?s=120&d=retro",
                "biography": "Hans-Gunter Lock was born in 1974 in Halle (Germany) and has lived since 2000 in Estonia. He is teaching and working both at the Estonian Academy of Music and Theatre and at the Estonian Academy of the Arts. His creative work contains electroacoustic music and acoustic compositions for various chamber music ensembles. In the field of electroacoustic composition spatialization has been an integral part of his artistic output. Working for a long time with 8-speaker systems he recently prefers higher order ambisonic technology. Therefore he designed his own 3-dimensional 23-speaker multichannel sound system, which is set up in his studio or can be moved for concerts into appropriate venues. Hans-Gunter Lock focuses on microtonal pitch organization systems regarding their specific melodic and harmonic features, composing with the Bohlen-Pierce scale and with 22 equal divisions of the octave. Engaging musicians to play microtonal he has employed specialized instruments like the Bohlen-Pierce clarinet, a microtonally refretted guitar, building specialized instruments by himself (modified recorders, tubular bells), and also creating specialized intonation exercises for flexible pitch inst",
                "date_modified": "2026-02-28T14:08:34.106224+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1276,
                        "forum_user": 18063,
                        "date_start": "2026-01-03",
                        "date_end": "2027-01-03",
                        "type": 0,
                        "keys": [
                            {
                                "id": 1112,
                                "membership": 1276
                            },
                            {
                                "id": 1114,
                                "membership": 1276
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "hans-gunter-lock-art",
            "first_name": "Hans-Gunter",
            "last_name": "Lock",
            "bookmarks": []
        },
        "slug": "between-ambisonic-field-recordings-speaker-array-development-and-concert-organizing-spatial-sound-activities-in-estonia",
        "pk": 4389,
        "published": true,
        "publish_date": "2026-02-18T22:41:46+01:00"
    },
    {
        "title": "GRM Tools Atelier workshop by Matthias Puech",
        "description": "GRM Tools Atelier is the new line of audio tools from INA GRM. It is a real-time, multichannel sound processing and synthesis environment, designed as a workbench for instrument creation.",
        "content": "<p><strong><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/3b4c214ee38f500dca7ca271bd0fb86f.png\" /></p>\r\n<p>&nbsp;</p>\r\n<p>We present <em>GRM Tools Atelier</em>, the new line of audio tools from INA GRM, continuing a development program initiated in 1990 by Hugues Vinet. <em>Atelier</em> is a complete real-time, multichannel sound processing and synthesis environment, designed as a workbench for instrument creation. Its interface makes it usable both in live conditions and as a sound material generator for sound design and generative composition.&nbsp;</p>\r\n<p>After a historical overview and a detailed presentation of the tool and its principles, we will focus on a few sound-design- and music-oriented tasks, and propose to solve them entirely with <em>Atelier</em>, interactively with the participants, to explore its interface and possibilities.</p>",
        "topics": [
            {
                "id": 4267,
                "name": "atelier",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4266,
                "name": "grm tools",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 369,
                "name": "Multichannel",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 231,
                "name": "Plugin",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 106,
                "name": "Software",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 209,
                "name": "Sound processing and manipulation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1096,
                "name": "workshop",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 44232,
            "forum_user": {
                "id": 44174,
                "user": 44232,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_1061.jpg",
                "avatar_url": "/media/cache/80/98/8098653e48d0701242cb8b37f8d72948.jpg",
                "biography": "Matthias Puech is Head of Research and Development at INA GRM, lead developer of the GRM Tools suite, and creator of GRM Tools Atelier. He holds a PhD in Computer Science and is a former assistant professor at CNAM Paris. As a researcher, his interests lie in programming languages, real time and embedded systems, and audio DSP. As a composer, his music has been published by the Hands in the Dark and Hallow Ground labels, and performed in Europe, notably at Café OTO (London, UK), Sonic Acts (Amsterdam, NL), gnration (Braga, PT), C/O Gallery (Berlin, DE), Kaserne (Basel, CH) and Akousma (Paris, FR). He has received commissions from Festival Présences, France Musique (\"Création Mondiale\") and France Culture (\"L'Expérience\").",
                "date_modified": "2026-02-27T11:57:22.522614+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 461,
                        "forum_user": 44174,
                        "date_start": "2023-06-02",
                        "date_end": "2024-06-02",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "mqtthiqs",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "grm-tools-atelier-workshop",
        "pk": 4384,
        "published": true,
        "publish_date": "2026-02-18T16:06:31+01:00"
    },
    {
        "title": "Symbiosis",
        "description": "Artistic research residency 2018.19\r\nÉric Raynaud aka Fraction.\r\nWithin the Espaces acoustiques et cognitifs team at Ircam-STMS and at the Société des Arts Technologiques (SAT).",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Artistic Research Residency 2018.19</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<p><strong>Symbiosis</strong><br />Within the <a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac/\">Espaces acoustiques et cognitifs</a> team of Ircam-STMS and at the <a href=\"http://sat.qc.ca/\" target=\"_blank\">Soci&eacute;t&eacute; des Arts Technologiques</a> (SAT).</p>\r\n<p>Symbiosis is a research-creation project for a performance that explores the potential of sound spatialization as a real-time audio-reactive vector for generative visual synthesis in an immersive audiovisual environment.</p>\r\n<p>The project aims to create an immersive performance staging the idea of unity between audiovisual materials, using spatialized sound matter to sculpt generative visual matter in 360&deg;. It includes a research and development component on the creative potential of 3D sound spatialization, based on IRCAM's Spat, as a real-time audio-reactive vector for generative visual synthesis in TouchDesigner or Max/Jitter within an immersive audiovisual environment (the Satosph&egrave;re, SAT). Building on a sketch of this performance, the project addresses the question of audiovisual interactions through an analytical approach to the sound signal in the 3D domain, establishing a method and creating bridging tools for artistic creation in performance.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">&Eacute;ric Raynaud</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202019/.thumbnails/eric_raynaud.jpg/eric_raynaud-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biography</h3>\r\n<p>Fraction, whose real name is &Eacute;ric Raynaud, is an audiovisual artist born in Brittany and based in Paris. His research focuses on forms of sound immersion and their interaction with visual media. His first productions appeared on the German label Shitkatapult before he joined the Paris label Infin&eacute; in 2008. In parallel, his career moved into the digital arts. He is particularly drawn to work involving complex stage designs and a hybrid digital writing that combines visual, sonic and physical media through computer-based tools. With the support of CNC-Dicream (2010), he conceived the immersive audiovisual performance <em>DROMOS</em>, first for the Elektronik festival and then, in its final form, for the Mutek festival (Montreal); following its success, it was featured by Apple in its 30th-anniversary video in 2014. That same year he also created <em>ObE</em>, a singular immersive installation covered by The Creators Project and many other media outlets.</p>\r\n<p>In the wake of these two ambitious works, he has sought in particular to weave links between 3D sound immersion, contemporary art and architecture, with a special interest in questions that combine science and environmental issues. Sound lies at the center of his practice: he experiments daily with its capacity to guide the writing of singular works, playing with its spatial, physical and emotional character to conceive atypical pieces that place the experience of the immediate \"physicality\" of space at the heart of his concerns.</p>\r\n<p>In 2014 he received the Institut Fran&ccedil;ais France/Quebec digital arts grant, which allowed him, during a residency at the Soci&eacute;t&eacute; des Arts Technologiques, to pursue research-creation work on real-time Ambisonic spatialization and interaction with new media. Drawing on this research, in 2015 he presented a first version of the project <em>Entropia</em>, combining fourth-order Ambisonic sound projection, pixel mapping and 360&deg; projection, also covered by The Creators Project. The culmination of several months of work, this piece was later adapted as an installation in a format that can be presented in many venues.</p>\r\n<p>In 2016 he was invited for a residency at the prestigious Spatial Sound Institute founded by 4DSound in Budapest (Hungary), built around a sound diffusion system unique in the world, where he revisited Xenakis's atypical work <em>Persepolis</em>; the institute hosted him again in 2018 for the project <em>Bardo</em>. He is also a SHAPE 2017 laureate, a European platform of 16 festivals and arts centers that supports innovative musicians and multidisciplinary artists across Europe.</p>\r\n<p>His work has been presented at many national and international venues, festivals and events devoted to electronic, experimental, digital and audiovisual culture, including MIRA, MUTEK, GogBot, MEQ, Maintenant, Sonica, Lab360, Z-KU, Gaiet&eacute; Lyrique, SAT Montreal, Resonate and Kikk.</p>\r\n</div>\r\n</div>\r\n<h2 class=\"dotted\">Links</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"http://www.fractionmusic.com/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i> http://www.fractionmusic.com/</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 32,
                "name": "Recherche Artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 30,
                "name": "Recherche Musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "symbiosis",
        "pk": 24,
        "published": true,
        "publish_date": "2019-03-21T16:05:13+01:00"
    },
    {
        "title": "All Answers Forbidden: From RAVE Models to a Virtual Singer by Bengisu Önder",
        "description": "The exploration of AI systems in artistic practice raises fundamental questions of intention and control. This presentation examines how the strengths of neural networks such as RAVE can be shaped within a compositional framework to construct an expressive virtual voice.",
        "content": "<div>\r\n<p><strong>➡️ This presentation is part of&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>The presentation shows how the strengths and possibilities of neural networks like RAVE can be understood and shaped in order to reveal their poetic potential when guided by artistic intention toward an unusual and expressive result. <em>All Answers Forbidden</em> is conceived as a sonic space of interaction between two sides of a duality: a soprano and a column of five loudspeakers named Totem, representing her alter ego. The voice of Totem was built from RAVE models I trained on recordings of the soprano, along with a few public ones. By exporting versions at different training stages, I obtained sounds evolving from fragmented and unstable to more coherent and lyrical. This transformation becomes the expressive core of Totem. I use these instabilities to form a trajectory of becoming, tracing the voice&rsquo;s growth from noise to lyric intensity and toward a kind of machine emotionality.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/13a4eaafc67d385f7215728a843c2ec7.png\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/30dd92b5c66098698ff45a64035ad2bf.png\" /></p>\r\n</div>",
        "topics": [
            {
                "id": 1774,
                "name": "neural synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4270,
                "name": "virtual singer",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18541,
            "forum_user": {
                "id": 18534,
                "user": 18541,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Screenshot_2026-02-19_at_4.12.10_PM.png",
                "avatar_url": "/media/cache/e2/d7/e2d7e7721a19afd598917a4a73ac496d.jpg",
                "biography": "Bengisu Önder is a Paris-based composer whose work bridges cutting-edge technologies and the human, emotional core of sound. Influenced by world events and research in perception and psychoacoustics, she creates layered, immersive works through virtuosic instrumental writing and electronics.\n\nShe studied composition at HMDK Stuttgart with Marco Stroppa and at CNSMDP with Frédéric Durieux, alongside electronic music studies with Yan Maresz, Luis Naon, and Grégoire Lorieux. \n\nHer mixed works have been presented at the IRCAM Forum in Paris, ZKM next_generation in Karlsruhe, and the Tehran International Electronic Music Festival. Since 2021, she has been a teaching assistant at HMDK Stuttgart’s electronic music studio.\n\nShe is currently pursuing a Konzertexamen in Computermusik at HMDK Stuttgart with Marco Stroppa and Piet Johan Meyer, and a Master’s in Musicology at Université Paris 8 under the supervision of Alain Bonardi. \n\nSince 2026, she has been composer-in-residence with Ensemble Court-Circuit.",
                "date_modified": "2026-02-23T10:53:08.621365+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Bengisu",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "all-answers-forbidden-from-rave-models-to-a-virtual-singer",
        "pk": 4388,
        "published": true,
        "publish_date": "2026-02-18T20:02:33+01:00"
    },
    {
        "title": "Somax 2 Goes Audio! [New Release]",
        "description": "Somax 2.4 has just been released. Now learned audio material can react in a structural and co-creative way to your improvisation or musical source.\r\nPresentation Thursday, March 24, in Studio 5 at 14:30.",
        "content": "<div>\r\n<div>\r\n<div>\r\n<div>\r\n<p>Somax 2 is a multi-agent interactive system performing machine co-improvisation with live musicians, based on machine listening, machine learning, and generative processes.</p>\r\n<p>Agents provide stylistically coherent improvisations based on learned musical knowledge while continuously listening and adapting to input from musicians or other agents in real time. The system is trained on any musical material chosen by the user, effectively constructing a generative model (called a corpus) from which it draws its musical knowledge and improvisation skills. Corpora, inputs and outputs can be MIDI as well as audio, and inputs can be live or streamed from MIDI or audio files.</p>\r\n<p>Somax 2 is one of the improvisation systems descended from the well-known OMax software, presented here in a completely new implementation. As such, it shares with its siblings the general loop [listen/learn/model/generate], using a form of statistical modeling that builds a highly organized memory structure. From this memory it can navigate toward new musical organizations while preserving stylistic coherence, rather than generating unheard sounds as other machine-learning systems do.</p>\r\n<p>However, Somax 2 adds entirely new versatility: it is highly reactive to the musician's decisions, and its creative agents communicate and collaborate in the same way, thanks to cognitively inspired interaction strategies and a finely optimized concurrent architecture that let all its units cooperate smoothly.</p>\r\n<p>Somax 2 offers detailed parametric control of its players. It can be played on its own as an instrument in its own right or used in a composition workflow, and it can listen to multiple sources and run entire ensembles of agents that the user controls in detail.</p>\r\n<p><a href=\"https://www.stms-lab.fr/projects/pages/somax2\">https://www.stms-lab.fr/projects/pages/somax2</a></p>\r\n<p><a href=\"https://forum.ircam.fr/projects/detail/somax-2/\">https://forum.ircam.fr/projects/detail/somax-2/</a></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 748,
                "name": "co-creativity",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 749,
                "name": "Creative AI",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1651,
                "name": "Improvisation, générativité et interactions co-créatives",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 750,
                "name": "Multi-Agent system",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 583,
                "name": "Omax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 747,
                "name": "somax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17748,
            "forum_user": {
                "id": 17743,
                "user": 17748,
                "first_name": "Gerard",
                "last_name": "Assayag",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e7f22ca09fef8b854d33ed5de26b107e?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-03T15:40:57.523680+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1236,
                        "forum_user": 17743,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "assayag",
            "first_name": "Gerard",
            "last_name": "Assayag",
            "bookmarks": []
        },
        "slug": "somax-2-goes-audio-new-release",
        "pk": 1134,
        "published": true,
        "publish_date": "2022-03-23T18:44:49+01:00"
    },
    {
        "title": "Seeking advice on understanding, selecting and buying well-suited, very high-quality equipment for capture, recording, signal processing and playback",
        "description": "For a private personal project, I am looking for advice on understanding, selecting and buying equipment for capture, recording, signal processing and playback, suited to the goals described in the article (very high quality, sensitivity, ...).",
        "content": "<p>Hello,</p>\n<p>I have little experience in the field of sound, but I have a scientific background (DUT in Physical Measurements) and beyond, which should make it easier to exchange with experienced sound practitioners.</p>\n<p>I have already used a Zoom H6 (in a basic way) with a furry windscreen but without accessories (boom pole, ...) for the goals below. I do not think this is the best-suited equipment.</p>\n<p>I am looking:<br>* to record all airborne sounds, in every possible environment (indoors, and outdoors from open nature to urban settings), without targeting any particular activity,<br>* omnidirectionally,<br>* to record from the lowest frequencies to the highest, including those at the limit of audibility,<br>* to select/filter them so as to listen back to them separately from the rest of the frequency range,<br>* for equipment offering the broadest possible compatibility with other equipment I might need, allowing me to extend my range of functionality,<br>* to achieve very high quality, fidelity of reproduction, resolution and sensitivity (from measurement to recording, from signal processing to playback),<br>* and thus to look toward equipment from the industrial, research or professional fields rather than consumer products,<br>* for recording times from short to long (up to 8 h),<br>* with mobility possible but not essential,<br>* and I want to pay a fair price; the budget criterion should be weighed after all of the above.</p>\n<p><br>So, toward which equipment would you point me, and toward which advice or organizations?</p>\n<p>Any contribution is welcome; thank you.</p>",
        "topics": [],
        "user": {
            "pk": 49977,
            "forum_user": {
                "id": 49917,
                "user": 49977,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/0b591c63b90de2edd7ef8ae916b16125?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "djailz",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "obtenir-des-conseils-pour-comprendre-selectionner-acheter-des-equipements-de-capture-enregistrement-traitement-du-signal-re-ecoute-adaptes",
        "pk": 2540,
        "published": false,
        "publish_date": "2023-09-05T14:31:16.640321+02:00"
    },
    {
        "title": "Fabrizio di Salvo / Hochschule der Künste Bern",
        "description": "",
        "content": "<p style=\"text-align: center;\"><img src=\"/media/uploads/Ateliers du Forum Paris 2020/fabrizio_di_salvo.jpg\" alt=\"\" width=\"435\" height=\"244\" /></p>\r\n<p style=\"text-align: justify;\">Fabrizio Di Salvo was born in Switzerland and is of Italian origin. His works sit at the borders of experimental music, contemporary composition, sound installation and performance art, focusing on concepts that examine patterns between politics and social life. He earned a bachelor's degree in sound and media art at the Hochschule der K&uuml;nste Bern as well as a sound engineering diploma. He has taken part in workshops and master classes with Cathy van Eck, Gilbert Nouno, Helmut Lachenmann, Malcolm Braff, Angela K&ouml;rfer-B&uuml;rger, Stefan Prins, Simon Steen Andersen, Dr. Johannes S. Sistermanns, Teresa Carrasco and Urs Peter Schneider, among others. As a composer, choreographer and sound artist, his works have been presented at Milano Musica, Theater Basel, Theater Rote Fabrik Z&uuml;rich, Theater Paco Rabal Madrid, Theater Conde Duque Madrid, Theater Roxy Birsfelden, Neues Theater Dornach, Tanztage Berlin Sophiens&aelig;le, Schwankhalle Bremen, M&uuml;nchner Kammerspiele, Museum der Kulturen Basel, Landesmuseum Z&uuml;rich, Kunsthalle Winterthur, Kunsthaus Baselland, Kunstmuseum La Chaux-de-Fonds, Arcaden Gallery Berlin, Fondation l'Abri Gen&egrave;ve, Interdans Festival Belgium, Les Digitales Festival Bern, Neu Bad Luzern and Dampfzentrale Bern, among others.</p>\r\n<p style=\"text-align: justify;\">He is currently pursuing a Master's in contemporary artistic practice with a minor in composition at the Hochschule der K&uuml;nste Bern. He sees himself as a bricoleur, estranging both materials and compositions in search of a coherent, subjective experience. This can lead to sound installations, new instruments, choreographies or compositions, and the results can be considered highly interdisciplinary. His work hovers between the visible and the invisible, the choreographed and the everyday, the quiet and the loud. Fragility, for him, is the experience of the sensitive, the compassionate, the empathetic, and of deep strength. This understanding lies at the heart of his work, with experience as the most important medium of his artistic practice: an experience that is not subordinated to a goal, but celebrates the moment of shared joy, and can therefore be perceived as a creative impulse and the starting point of every work.</p>\r\n<p><strong style=\"font-size: 1.125rem;\">theoneandthemany</strong></p>\r\n<p>Fabrizio Di Salvo: concept, realization, creation<br />Sol Bilbao Lucuix: performance, creation<br />Seven Chosen: text - \"The Conscience of a Hacker\" by Loyd Blankenship, 1986</p>\r\n<p style=\"text-align: justify;\">theoneandthemany is a piece for a performer and a table. Fitted with a motion sensor, the table becomes a giant knob: turning and tilting it plays a text forward and backward, letting the performer manipulate the text like a magnetic tape.</p>\r\n<p style=\"text-align: justify;\">The piece takes the audience on a journey into the mechanisms behind Internet echo chambers and explores the veracity and construction of truth today. This musical object is performed by the dancer Sol Bilbao Lucuix.</p>\r\n<p style=\"text-align: justify;\">The term \"echo chamber\" refers to situations on social networks in which specific kinds of opinions and convictions are reinforced and spread through repetition and continuous communication among users who share the same kind of thinking inside a closed system. In the political context, the echo chamber feeds and proliferates specific tendencies through repetition, generating a polarization of political ideologies, since people's convictions and orientations are endorsed and confirmed by like-minded others.</p>",
        "topics": [
            {
                "id": 319,
                "name": "Dance",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 318,
                "name": "Experimental music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 317,
                "name": "Glitch",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 320,
                "name": "Hacker",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2,
                "name": "MaxMSP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 100,
                "name": "Sensor",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5711,
            "forum_user": {
                "id": 5708,
                "user": 5711,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d035dbf59caa6a09db4c3cd97622c517?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Fabrizio",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-one-and-the-many",
        "pk": 510,
        "published": true,
        "publish_date": "2020-02-12T11:51:56+01:00"
    },
    {
        "title": "Measuring the environmental impacts of XR: from a blank page to a forecast to 2030 - Landia Egal",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>The importance and challenges of assessing the environmental impacts of XR today and in the medium term: the state of the art, the missing pieces, what can be done, the current efforts, and why it is important for all professionals to have a good understanding of these challenges and risks.</p>",
        "topics": [],
        "user": {
            "pk": 39084,
            "forum_user": {
                "id": 39032,
                "user": 39084,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Photo_Landia_Egal.jpg",
                "avatar_url": "/media/cache/b4/84/b48431d4f9467a7224c542c5eb07ff9c.jpg",
                "biography": "Landia Egal is an immersive director and the founder of the production company Tiny Planets. She is committed to the creation of more sustainable and equitable narratives and imaginaries that can have an influence on reality. Tiny Planets is also leading a challenging and ambitious research project aiming to assess the environmental impacts of the XR sector, from today to 2030.",
                "date_modified": "2023-02-23T11:36:10+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "landiaegal",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "measuring-the-environmental-impacts-of-xr-from-a-blank-page-to-a-forecast-to-2030",
        "pk": 2080,
        "published": true,
        "publish_date": "2023-02-24T11:46:57+01:00"
    },
    {
        "title": "Nouvelles d'Anemond Studio : Factorsynth 3, Factoid 2 et projets à venir - J.J. Burred",
        "description": "Discussion et démonstration des dernières versions des outils d'Anemond pour la transformation du son basée sur la factorisation matricielle.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p>Pr&eacute;sent&eacute; par : J.J Burred&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/jjburred/\">Biographie</a></p>\r\n<p></p>\r\n<p>Anemond est un studio de logiciels musicaux bas&eacute; &agrave; Paris, fond&eacute; par le chercheur ind&eacute;pendant J.J. Burred. Anemond d&eacute;veloppe et distribue Factorsynth (logiciel partenaire du Forum IRCAM) et Factoid, deux outils qui d&eacute;construisent les sons en composants en utilisant une technique d'analyse de donn&eacute;es appel&eacute;e factorisation matricielle. Une fois les composants extraits automatiquement, l'utilisateur peut les recombiner manuellement ou al&eacute;atoirement pour cr&eacute;er des transformations sonores complexes, telles que des modifications de rythme et de m&eacute;lodie et un type particulier de synth&egrave;se crois&eacute;e.<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f15ab869db841c079f08ee8aebf663b6.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Le projet Factorsynth a &eacute;t&eacute; lanc&eacute; en 2013 sous la forme d'un prototype de recherche, qui a d'abord &eacute;t&eacute; utilis&eacute; par des compositeurs affili&eacute;s &agrave; l'IRCAM tels que Maurizio Azzan, Emanuele Palumbo et Mikhail Malt pour la transformation du timbre et la spatialisation. Il a ensuite &eacute;volu&eacute; pour devenir un dispositif commercial Max For Live utilis&eacute; par des compositeurs, des DJ et des concepteurs sonores. En 2023, J.J. 
Burred a fond&eacute; Anemond pour red&eacute;velopper et distribuer Factorsynth en tant que plugin VST/AU, ainsi qu'en tant qu'application autonome.</p>\r\n<p>Alors que Factorsynth est un studio de conception sonore &agrave; part enti&egrave;re permettant l'&eacute;dition d&eacute;taill&eacute;e et la recombinaison des composants, Factoid est un outil plus l&eacute;ger qui se concentre sur la g&eacute;n&eacute;ration facile de variations m&eacute;lodiques et rythmiques de boucles sur une interface beaucoup plus simple.<img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/8c59f8e02adde2687f219014469fee93.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Dans cette pr&eacute;sentation, J.J. discutera et fera la d&eacute;monstration des it&eacute;rations actuelles des deux outils : la version 3 de Factorsynth, sortie en 2023, et la version 2 de Factoid, sortie en 2024. Ils ne sont pas seulement une r&eacute;impl&eacute;mentation compl&egrave;te des dispositifs Max For Live pr&eacute;c&eacute;dents, mais incluent de nombreuses nouvelles fonctionnalit&eacute;s, telles que le contr&ocirc;le MIDI des composants individuels, le panoramique al&eacute;atoire, le verrouillage des composants et l'exportation individuelle ou globale vers des pistes DAW.</p>\r\n<p>La pr&eacute;sentation se terminera par un aper&ccedil;u des projets &agrave; venir d'Anemond.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a></strong></p>\r\n<p></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 7286,
            "forum_user": {
                "id": 7283,
                "user": 7286,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/jj_piano_cut.jpg",
                "avatar_url": "/media/cache/5f/b2/5fb22be573f4a25cce9f74d0ebcaa098.jpg",
                "biography": "J.J. Burred is a researcher and developer specialized in music technology. He holds a PhD in Engineering from the Technical University of Berlin and has worked as a researcher at IRCAM-Centre Pompidou (Paris) and Audionamix on topics such as source separation, automatic music analysis, sound synthesis and musical applications of machine learning. In 2023 he was a Visiting Scholar at the Center for New Music And Audio Technologies (CNMAT) at the University of California, Berkeley. He has worked with artists and composers such as Marco Stroppa, Holly Herndon, Mat Dryhurst and Ralph Killhertz, and is the founder of the Paris-based music software studio Anemond. On the musical side, he is a classically-trained pianist and has played with jazz and electronic groups.",
                "date_modified": "2024-03-28T15:00:47.188533+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jjburred",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "news-from-anemond-studio-factorsynth-3-factoid-2-and-upcoming-projects",
        "pk": 2736,
        "published": true,
        "publish_date": "2024-02-15T13:09:35+01:00"
    },
    {
        "title": "Choeurs et Thermophones",
        "description": "Le propos est de susciter le besoin de création chez les jeunes musiciens, compositeurs et créateurs de la scène et du spectacle vivant en leur mettant à disposition des orgues thermiques. Ces derniers font entendre un son unique, celui des Thermophones, sur lequel s’accordent voix humaine et chant choral. Les lieux proposés sont les grands lieux résonnants : abbatiales, grandes églises ou salles de château des monuments du patrimoine. Il réunira, entre autres, des chorales et des interprètes, manipulatrices et manipulateurs de Thermophones.",
        "content": "<p>Choeurs et Thermophones, laur&eacute;at des \"Mondes Nouveaux\" (english below). La proposition de Jacques R&eacute;mus &laquo; Ch&oelig;urs et Thermophones, le chant des orgues thermiques &raquo; a &eacute;t&eacute; retenue par le Comit&eacute; artistique du projet &laquo; Mondes nouveaux &raquo; lanc&eacute; par le Minist&egrave;re de la culture. Les laur&eacute;ats ont &eacute;t&eacute; re&ccedil;us par le Pr&eacute;sident de la R&eacute;publique le 8 novembre &agrave; l&rsquo;Elys&eacute;e.</p>\r\n<p>Le propos est de susciter le besoin de cr&eacute;ation chez les jeunes musiciens, compositeurs et cr&eacute;ateurs de la sc&egrave;ne et du spectacle vivant en leur mettant &agrave; disposition ses orgues thermiques. Ces derniers font entendre un son unique, celui des Thermophones, sur lequel s&rsquo;accordent voix humaine et chant choral.<br />Les lieux propos&eacute;s sont les grands lieux r&eacute;sonnants : abbatiales, grandes &eacute;glises ou salles de ch&acirc;teau des monuments du patrimoine. Il r&eacute;unira, entre autres, des chorales et des interpr&egrave;tes, manipulatrices et manipulateurs de Thermophones.</p>\r\n<p></p>",
        "topics": [
            {
                "id": 692,
                "name": "appel",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 641,
                "name": "Choirs",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 640,
                "name": "Thermophones",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 627,
            "forum_user": {
                "id": 627,
                "user": 627,
                "first_name": "Jacques",
                "last_name": "Rémus",
                "avatar": "https://forum.ircam.fr/media/avatars/Jacques_Remus_photo_Marine_Lale_600x600_DSC_7184.png",
                "avatar_url": "/media/cache/87/d5/87d5a3210f1b68fa331488c355189592.jpg",
                "biography": "Jacques Rémus\n\nBiologiste à l'origine (agronome et chercheur en aquaculture), Jacques Rémus a choisi à la fin des années 70, de se consacrer à la musique et à l'exploration de différentes formes de création. Saxophoniste, il a participé à la fondation du groupe Urban-Sax. Il apparaît également dans de nombreux concerts allant de la musique expérimentale (Alan Sylva, Steve Lacy) à la musique de rue (Bread and Puppet). \n\nAprès des études en Conservatoires, G.R.M. et G.M.E.B., il a écrit des musiques pour la danse, le théâtre, le \"spectacles totaux\", la télévision et le cinéma. Il est avant tout l'auteur d'installations et de spectacles mettant en scène des sculptures sonores et des machines musicales comme \"Bombyx\", le \"Double Quatuor à Cordes\", \"Concertomatique\", \"Léon et le chant des mains\", les \"Carillons\" N ° 1, 2 et 3, : « l'Orchestre des Machines à Laver » ainsi que ceux présentés au Musée des Arts Forains (Paris).\n\nDepuis 2014, son travail s'est concentré sur le développement des «Thermophones». La construction d’un orgue mobile de 40 Thermophones de 5ème génération a permis de créer en 2023 le spectacle-concert « Chœurs et Thermophones »",
                "date_modified": "2025-12-05T12:05:16.942583+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 69,
                        "forum_user": 627,
                        "date_start": "2025-12-05",
                        "date_end": "2026-12-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 344,
                                "membership": 69
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "REMUS",
            "first_name": "Jacques",
            "last_name": "Rémus",
            "bookmarks": []
        },
        "slug": "choeurs-et-thermophones",
        "pk": 1043,
        "published": true,
        "publish_date": "2022-01-25T17:38:37+01:00"
    },
    {
        "title": "Plane of Emergence by Jan Ove Hennig",
        "description": "This project explores emergence and becoming as philosophical forces, where musical patterns develop from devices interacting on a plane of pure potential, free from hierarchical determination.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<h4><img src=\"https://forum.ircam.fr/media/uploads/jan_ove_hennig_-_plane_of_emergence_render.png\" alt=\"\" width=\"894\" height=\"384\" /></h4>\r\n<p></p>\r\n<p>Presented by: Jan Ove Hennig</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/kabuki/\" target=\"_blank\">Biography</a></p>\r\n<h3>Summary</h3>\r\n<p>\"Plane of Emergence\" is an experimental sound system where autonomous devices generate and interpret musical sequences based on their proximity to each other, creating an ever-evolving musical landscape. Rather than simply copying sequences, each device transforms musical input through variation, establishing temporary sonic territories that continuously form and dissolve.</p>\r\n<h3><strong>Conceptual Framework</strong></h3>\r\n<p>Drawing on theories of emergent systems, the project explores how musical patterns develop naturally through machine interactions, independent of central control or predetermined rules. 
Sonic territories exist in constant tension between two forces: territorialization, where patterns become established and devices develop interpretative habits, and deterritorialization, where these patterns transform and unexpected variations emerge to destabilize existing forms.</p>\r\n<p>Key dynamics:</p>\r\n<ul>\r\n<li>Continuous transformation between stability and instability</li>\r\n<li>Simultaneous processes of pattern formation and dissolution</li>\r\n<li>Emergence of complex behaviors from simple interactions</li>\r\n</ul>\r\n<h3>The Plane of Immanence</h3>\r\n<p>As the title suggests, the project explores Deleuze's concept of the plane of immanence - a fundamental level of existence prior to all forms and structures. This plane functions as a field of virtual possibilities before they're actualized, like potential musical interactions before specific patterns emerge. Without external organizing principles, everything emerges from within the plane itself, just as musical patterns arise from device interactions rather than predetermined scores.</p>\r\n<h3>Technical Implementation</h3>\r\n<p>The technical foundation builds upon the \"Spatially Distributed Instruments\" project, first showcased at the Forum Event in Seoul 2024. Both projects explore information exchange between autonomous devices, transforming digital data into acoustic manifestations through actuators. 
The devices serve dual roles - broadcasting and interpreting signals - as they transform digital patterns into sound waves that travel through space.</p>\r\n<p>Core features:</p>\r\n<ul>\r\n<li>Proximity-based interaction between devices</li>\r\n<li>Latency-free distribution of pattern data</li>\r\n<li>Dynamic reconfiguration of device relationships</li>\r\n</ul>\r\n<p>This architecture enables the emergence of complex sonic patterns through continuous transformation and actualization, where digital information manifests in the physical realm through sound.</p>\r\n<p>NOTE: the article will be updated with images&nbsp;</p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2550,
                "name": "max/msp ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2553,
                "name": "plane of immanence",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2554,
                "name": "sonic territories",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 59124,
            "forum_user": {
                "id": 59059,
                "user": 59124,
                "first_name": "Jan Ove",
                "last_name": "Hennig",
                "avatar": "https://forum.ircam.fr/media/avatars/Kabuki_Portrait_-_Processed.jpg",
                "avatar_url": "/media/cache/d0/7f/d07f990b002b5d863a5794680b842936.jpg",
                "biography": "I'm a sound artist and music producer based in Frankfurt, Germany with a passion for sharing knowledge. I've worked as lecturer at the Abbey Road Institute in Frankfurt (with focus on Max/MSP and sound synthesis) and developed video series for Softube (Modular Sound Explorations) and Korg (Sequencing Strategies) among others. In addition to releasing music and performing live with my modular synthesizer I'm also exhibiting large-format audio installations based around my interests in 3d printing, microcontrollers and their interactions with sensors and physical objects.",
                "date_modified": "2025-12-08T20:39:01.777661+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 965,
                        "forum_user": 59059,
                        "date_start": "2024-10-17",
                        "date_end": "2025-10-17",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "kabuki",
            "first_name": "Jan Ove",
            "last_name": "Hennig",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2759,
                    "user": 59124,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "plane-of-emergence",
        "pk": 3220,
        "published": true,
        "publish_date": "2025-01-19T17:11:20+01:00"
    },
    {
        "title": "Exécuter l'algorithme : Open Form Scores Notated and Structured as Visual Programming Interfaces (Partitions de forme ouverte notées et structurées comme des interfaces de programmation visuelle) - Luciana Perc",
        "description": "Cette étude examine deux partitions pour cordes à structure algorithmique ouverte, s'inspirant du concept d'aléa de Boulez, de l'approche de la composition aléatoire de Cage et de la musique hétéronome de Xenakis, en utilisant des logiciels basés sur des objets des années 1990 pour traiter le hasard dans les partitions instrumentales acoustiques.",
        "content": "<p><a href=\"/media/uploads/bandeaux_articles.png\"></a><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\">Pr&eacute;sent&eacute; par : Luciana Perc<br /><a href=\"https://forum.ircam.fr/profile/lucianap/\">Biographie</a></p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\">Cette &eacute;tude bas&eacute;e sur la pratique examine deux partitions de forme ouverte pour cordes, structur&eacute;es comme des interfaces de programmation visuelle algorithmique. Les deux &oelig;uvres ont &eacute;t&eacute; cr&eacute;&eacute;es en revisitant le concept d'al&eacute;a introduit par Boulez (1964), qui d&eacute;signe des structures superposables articul&eacute;es par des points de jonction, des plates-formes de bifurcation et des &eacute;l&eacute;ments adaptatifs mobiles, ainsi que l'approche de la composition al&eacute;atoire de Cage, ax&eacute;e sur la th&eacute;&acirc;tralit&eacute; de la performance musicale et le contexte acoustique et visuel de l'&eacute;v&eacute;nement musical (1961), et la musique h&eacute;t&eacute;ronome de Xenakis (1971), pr&eacute;sent&eacute;e comme un type de composition stochastique inform&eacute;e par la th&eacute;orie des jeux qui &eacute;tablit les r&egrave;gles d'un jeu comp&eacute;titif entre des interpr&egrave;tes simultan&eacute;s. 
Les cas de pratique musicale consid&eacute;r&eacute;s ici offrent une r&eacute;ponse compositionnelle &agrave; ces pr&eacute;occupations de recherche expos&eacute;es dans les ann&eacute;es 1960 et 1970 en s'appuyant sur des logiciels bas&eacute;s sur des objets d&eacute;velopp&eacute;s dans les ann&eacute;es 1990 qui ex&eacute;cutent des algorithmes pour traiter la forme musicale (Open Music, Max/MSP, Pure Data) afin d'aborder l'utilisation du hasard dans les partitions instrumentales acoustiques.</p>\r\n<p style=\"text-align: justify;\">Cette &eacute;tude porte sur deux &oelig;uvres compos&eacute;es par l'auteur. A Performing Monkey Game (Perc 2022) pour trio &agrave; cordes examine les commentaires de Pierre Boulez sur John Cage, qui l'avait d&eacute;peint comme un \"singe performant\", et s'inspire d'exp&eacute;riences men&eacute;es dans les ann&eacute;es 1960, au cours desquelles des gorilles ont &eacute;t&eacute; initi&eacute;s &agrave; la langue des signes am&eacute;ricaine. Les gorilles ont appris &agrave; comprendre et &agrave; r&eacute;pondre &agrave; la fois aux signes et aux mots parl&eacute;s, ce qui signifie qu'ils pouvaient d&eacute;coder les &eacute;l&eacute;ments sonores du langage. La partition offre une structure permettant aux interpr&egrave;tes de naviguer dans une forme al&eacute;atoire de jeu en prenant des d&eacute;cisions audibles et en &eacute;coutant les choix et les modes de jeu de chacun. Diffraction (Perc 2023) pour quatuor &agrave; cordes invite les interpr&egrave;tes &agrave; alterner le jeu de leur instrument avec le remontage et le tic-tac de m&eacute;tronomes m&eacute;caniques comme rep&egrave;re auditif pour articuler collectivement la forme globale de la performance. Cette &oelig;uvre s'inspire de la fascination de Ligeti (1983) pour les machines d&eacute;fectueuses afin d'apporter une variable al&eacute;atoire suppl&eacute;mentaire qui cr&eacute;e une tension po&eacute;tique avec les choix des interpr&egrave;tes. 
Les deux &oelig;uvres explorent les notions d'intra-action (plut&ocirc;t que d'interaction) permettant ce que j'introduis comme une nouvelle &eacute;coute mat&eacute;rialiste, v&eacute;cue comme un exercice philosophique diffractif sugg&eacute;r&eacute; par Barad (2007). Cette recherche &eacute;largit l'application du technof&eacute;minisme &agrave; la performance musicale, en se concentrant sur les interactions algorithmiques, contribuant &agrave; la fois &agrave; l'utilisation du hasard dans la composition musicale et au d&eacute;veloppement de dispositifs de participation humaine par le biais d'algorithmes bas&eacute;s sur l'ordinateur.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong><span>&nbsp;</span></p>",
        "topics": [
            {
                "id": 1758,
                "name": "algorithmic composition",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1766,
                "name": "New musical materialisms",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1765,
                "name": "open-form score",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1764,
                "name": "stochastic music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 23499,
            "forum_user": {
                "id": 23473,
                "user": 23499,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/D92AF086-E498-472C-8134-DA2B98EDDE73-scaled.jpeg",
                "avatar_url": "/media/cache/14/71/14712758ca48374a20fe2def65134c88.jpg",
                "biography": "Luciana is a composer, performer and researcher creatively exploring technologies, such as live and fixed electronics and video, as well as the boundaries between art forms, namely instrumental theatre, intermedial performance and sound art. Her work has been recently presented at Line Upon Line’s Winter Composer Festival (Austin), Festival MÀD (Bordeaux) by Proxima Centauri, NIME (Utrecht), IRCAM’s Forum Workshops, Cite des Arts Paris, Darmstädter Ferienkurse, Tête-à-Tête: The Opera festival (London), Music of the Americas (NYC) by Ensemble 2e2m and San Diego Opera’s OperaHack 3.0 (US), Darmstädter Ferienkurse Open Space, Musikfestival Bern, Acht Brücken Festival Cologne, Playtime Festival, Gare du Nord Basel, Société de Musique Contemporaine Lausanne, three editions of CICTEM, and Centro Nacional de la Música. A fellow of the HEA, her teaching activity recently took place at London College of Communication’s (UAL) Guest Lectures Series, Latin Elephant’s Community Music Ensemble, the School of Creative Technologies (UoP), the Outreach department of the Festival d’Aix (France), Trinity Laban’s Learning and Participation department and Royal Central School of Speech and Drama.",
                "date_modified": "2026-02-03T17:51:24.708397+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lucianap",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "performing-the-algorithm-open-form-scores-notated-and-structured-as-visual-programming-interfaces",
        "pk": 2726,
        "published": true,
        "publish_date": "2024-02-13T14:43:50+01:00"
    },
    {
        "title": "ISMM News by Frederic Bevilacqua, Diemo Schwarz, Riccardo Borghesi, Benjamin Matuszewski, Jérôme Nika",
        "description": "",
        "content": "<p><strong><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e9eac02b75ab94d35effe245c84ccbb9.png\" /></p>\r\n<p>We will present new features of the MuBu for Max framework for multimodal analysis of sound and motion, interactive sound synthesis and machine learning, the CataRT and SKataRT corpus-based synthesis tools for Max and Ableton Live, the Gestural Sound Toolkit for the prototyping of gesture&ndash;sound interaction scenarios, and different versions, improvements and documentation of the JavaScript ecosystem dedicated to distributed systems (soundworks, dotpi-tools, node-web-audio-api).</p>",
        "topics": [
            {
                "id": 60,
                "name": "Catart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1865,
                "name": "catart-mubu",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4269,
                "name": "Dots",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 101,
                "name": "Gesture",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 639,
                "name": "ISMM Team",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 752,
                "name": "javascript",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 61,
                "name": "Mubu",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 632,
                "name": "Skatart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 36,
            "forum_user": {
                "id": 36,
                "user": 36,
                "first_name": "Diemo",
                "last_name": "Schwarz",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/9bf9105c2fbdb55023f9437ac99a6630?s=120&d=retro",
                "biography": "Diemo Schwarz is a researcher at IRCAM, and a musician and creative programmer. He performs on his own digital musical instrument based on his CataRT open source software, exploring different collections of sound with the help of gestural controllers that reconquer musical expressiveness and physicality for the digital instrument, bringing back the immediacy of embodied musical interaction to the rich sound worlds of digital sound processing and synthesis.\nHe interprets and performs improvised electronic music as member of the ONCEIM improvisers orchestra, ensemble Ikosikaihenagone, and various other musicians, and he composes for dance and performance, video, and installation.\nHis scientific research on sound analysis/synthesis and gestural control of interaction with music is the basis of his artistic work, and allows to bring advanced and fun musical interaction to expert musicians and the general public.\nIn 2017 he was DAAD Edgar-Varèse guest professor for computer music at TU Berlin, and in 2022 artist in residence in the Arts, Sciences, Societies fellowship program of IMéRA institute of advanced studies, Aix–Marseille Université.",
                "date_modified": "2026-02-24T12:21:32.536216+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 397,
                        "forum_user": 36,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-19",
                        "type": 0,
                        "keys": [
                            {
                                "id": 7,
                                "membership": 397
                            },
                            {
                                "id": 9,
                                "membership": 397
                            },
                            {
                                "id": 13,
                                "membership": 397
                            },
                            {
                                "id": 21,
                                "membership": 397
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "schwarz",
            "first_name": "Diemo",
            "last_name": "Schwarz",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 329,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 257,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 496,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 38,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 1045,
                    "user": 36,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 21,
                    "emitter_object_id": 299,
                    "user": 36,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "ismm-news-by-frederic-bevilacqua-diemo-schwarz-riccardo-borghesi-benjamin-matuszewski-jerome-nika-1",
        "pk": 4387,
        "published": true,
        "publish_date": "2026-02-18T18:51:19+01:00"
    },
    {
        "title": "坎Kǎn by Yellow Wasabi",
        "description": "Kǎn is an immersive experience recall the philosophy of water, transport by traditional instrument Guzheng and natural soundscapes, interact with body movement. This project is developped during residencies in GRAME and GMEM.",
        "content": "<p><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></p>\r\n<p>Kǎn is the symbol for water in Feng Shui, the Chinese philosophy of the relationship between man and nature. Water is a component of the human being, and the world is connected by it. Nowadays, data is like water connecting the world.&nbsp;</p>\r\n<p><img alt=\"Credit: Yana Strakaza\" src=\"https://forum.ircam.fr/media/uploads/user/12ae954f79d20dfb62fdbe100a36b38b.jpg\" /></p>\r\n<p>Yellow Wasabi proposes to use transdisciplinary methods, arts and sciences, to reconnect human and water through an interactive audiovisual experience realized by data. Yellow Wasabi plays the Guzheng, the traditional Chinese harp, amplified in electroacoustic program, and reproduces the gesture of water in Qigong, captured by an intelligent motion tracker. The resonant sound is arranged by an underwater recording in a musical composition process under Max/MSP.</p>\r\n<p>Yellow Wasabi generates an interactive 3D neural network projection from the soundscape of the Guzheng and the water, creating an immersive space. The audience will be able to connect to the water via OSC interaction and revisit their relationship with it. The public will be aware that their behavior affects the water, which will be reflected in sound and musical variations. 
Only with an inclusive spirit can the relationship between man and water achieve harmony.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/b109f16c00826d62e4a334b81ef19a8a.jpg\" /></p>\r\n<p>For revisiting the relationship between the instrument, the musician and the audience, Yellow Wasabi seeks further collaboration on connecting musician and audience via the Guzheng, using electronic modules capable of measuring physiological data such as heartbeat, body temperature or respiration. In this way, transmitters and receivers can interact in real time, giving physiological feedback when they hear the music.</p>\r\n<p>&nbsp;</p>",
        "topics": [
            {
                "id": 4229,
                "name": "Audiovisual Performance",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4323,
                "name": "Chinese instrument",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4231,
                "name": "Immersive Art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 158667,
            "forum_user": {
                "id": 158437,
                "user": 158667,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/favicon_Zb2NxnC.jpg",
                "avatar_url": "/media/cache/bb/46/bb46e9aa405a7b51693db0bbea75fc6b.jpg",
                "biography": null,
                "date_modified": "2026-02-27T15:57:35.074746+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "yellowwasabi",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "kan-by-yellow-wasabi",
        "pk": 4423,
        "published": true,
        "publish_date": "2026-02-24T16:26:32+01:00"
    },
    {
        "title": "Designing Dicy2 music generation agents through artistic collaborations",
        "description": "Upcoming release of Dicy2 (musical interactions with generative agents) for Max & Ableton Live ! A flashback in the form of a zapping of the artistic collaborations that have given birth to it over the last 5 years at IRCAM.",
        "content": "<p>Upcoming release of Dicy2 (musical interactions with generative agents) <a href=\"/projects/detail/dicy2/\">for Max</a> &amp; <a href=\"/projects/detail/dicy2-for-live/\">Ableton Live</a> ! A flashback in the form of a zapping of the artistic collaborations that have given birth to it over the last 5 years at&nbsp;<a href=\"https://www.linkedin.com/company/ircam/\">IRCAM</a>. <br><br><a href=\"https://www.youtube.com/watch?v=Yt_JS1HAuS4\">See the video on Youtube</a></p>\n<p>&lt;iframe title=\"YouTube video player\" src=\"https://www.youtube.com/embed/Yt_JS1HAuS4\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"&gt;&lt;/iframe&gt;</p>\n<p>Dicy2 integrates scientific and musical research results accumulated through productions and experiments with&nbsp;R&eacute;mi Fox, Steve Lehman,&nbsp;Orchestre National de Jazz, Alexandros Markeas and&nbsp;Poletti Manuel, Pascal Dusapin and&nbsp;Thierry Coduys,&nbsp;Le Fresnoy - Studio national des arts contemporains,&nbsp;Vir Andres Hera,&nbsp;Ga&euml;tan Robillard, Beno&icirc;t Delbecq, Jozef Dumoulin, Ashley Slater, Herv&eacute; Sellin, Rodolphe Burger,&nbsp;Marta Gentilucci... After having evolved research prototypes crystallising the contributions of these various projects for several years, a collaborative work carried out during the year 2022 has led to the finalisation of a release of Dicy2 as a plugin for Ableton Live and a library for Max.</p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 955,
                "name": "Computer Assisted Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 636,
                "name": "Generative music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 313,
                "name": "Machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1035,
                "name": "Music generation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 129,
                "name": "Real time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18367,
            "forum_user": {
                "id": 18360,
                "user": 18367,
                "first_name": "Jerome",
                "last_name": "Nika",
                "avatar": "https://forum.ircam.fr/media/avatars/jerome_nika-466x233.jpg",
                "avatar_url": "/media/cache/f2/20/f220de2bc73567220b06bd17faf4baa1.jpg",
                "biography": "As a researcher at Ircam, Jérôme Nika’s work focuses on how to model, learn, and navigate an “artificial musical memory” in creative contexts. In opposition to a “replacement approach” where AI would substitute for human, this research aims at designing novel creative practices involving a certain level of symbolic abstraction such as “interpreting / improvising the intentions” and “composing the narration“. \nNumerous productions have the resulting technologies: Roulette, NYC; Onassis Center, Athens; Ars Electronica Festival, Linz; Frankfurter Positionen festival; Annenberg Center, Philadelphia; Bimhuis, Amsterdam; French embassy Washington DC; Maison de la Radio, Centre Pompidou, Collège de France, LeCentquatre, Paris; Montreux Jazz Festival; Montreal Jazz Festival etc.\nAs a musician, computer music designer, or scientific advisor, he is involved in numerous musical productions and artistic collaborations, particularly in improvised music (Steve Lehman, Orchestre National de Jazz, Bernard Lubat, Benoît Delbecq, Rémi Fox), contemporary music (Pascal Dusapin, Alexandros Markeas, Ensemble Modern, Marta Gentilucci), and contemporary art (Le Fresnoy).",
                "date_modified": "2026-02-23T11:56:29.425335+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 644,
                        "forum_user": 18360,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-25",
                        "type": 0,
                        "keys": [
                            {
                                "id": 448,
                                "membership": 644
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "jnika",
            "first_name": "Jerome",
            "last_name": "Nika",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2757,
                    "user": 18367,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "designing-dicy2-music-generation-tools-through-artistic-collaborations",
        "pk": 1995,
        "published": false,
        "publish_date": "2022-12-08T19:17:33.780091+01:00"
    },
    {
        "title": "The Manifesto of New-Art III",
        "description": "For the Beginning see at The Manifesto of New-Art I",
        "content": "<p><img src=\"/media/uploads/user/e71c533469aea53aa467f597d01556d8.jpg\" alt=\"\" width=\"344\" height=\"277\" /></p>\n<p>&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So most people known more or less about this System. The System of the Cycle of Cognition. They known about the attacks of Manipulation. Manipulation we are everyday exposed.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">But what&rsquo;s the medicine in the right Hand can be the poison in the wrong Hand. So the Way of Recapture the own Mind is going to become a Trap Door. A Trap Door for most of the Esoteric Societies. And even one of this Societies are Scientology.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Trap Doors for Scientology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Total Freedom Trap:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.xenu.net/archive/books/ttft\">http://www.xenu.net/archive/books/ttft</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Training Routines:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.cs.cmu.edu/~dst/Secrets/TR/critique.html\">http://www.cs.cmu.edu/~dst/Secrets/TR/critique.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Scientology Lyrics:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://genius.com/Rick-ross-scientology-lyrics\">https://genius.com/Rick-ross-scientology-lyrics</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Medicine can be a Poison:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">All Medicine are Poisons:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://sciencebasedmedicine.org/all-medicines-are-poison\">https://sciencebasedmedicine.org/all-medicines-are-poison</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Lei of Scientology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://narcononlies.wordpress.com/2014/07/11/lie-3-all-drugs-are-poisons\">https://narcononlies.wordpress.com/2014/07/11/lie-3-all-drugs-are-poisons</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Answers:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.answers.com/Q/Are_some_medicines_poisonous\">https://www.answers.com/Q/Are_some_medicines_poisonous</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">To save you for Manipulation:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">8 Simple Ways:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://brightside.me/article/eight-simple-ways-to-avoid-being-manipulated-11405\">https://brightside.me/article/eight-simple-ways-to-avoid-being-manipulated-11405</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to protect yourself:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://livepurposefullynow.com/recognize-and-protect-yourself-from-manipulation\">https://livepurposefullynow.com/recognize-and-protect-yourself-from-manipulation</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Trap Doors:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://psychologenie.com/manipulation-techniques\">https://psychologenie.com/manipulation-techniques</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Trap-door._2.jpg\">https://commons.wikimedia.org/wiki/File:Trap-door._2.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/48d7c2a4ed3513a03c89d79ce5b4f202.jpg\" alt=\"\" width=\"344\" height=\"258\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So the Beginning of the way to break out of the Cycle of Cognition is to understand this Process of Cognition.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">But not Political actionism is the way out.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Nor is it the way to do any Auditing.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Nor to make any Experience with Drugs.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Nor the down singing of any chantey Thesis from a Saatsang Leader.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Political Actionism against secret Society:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">First Way:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.newdawnmagazine.com/articles/political-secret-societies-the-hidden-paths-of-power\">https://www.newdawnmagazine.com/articles/political-secret-societies-the-hidden-paths-of-power</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s real secret Society:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.documentarytube.com/articles/top-5-secret-societies-with-real-power-in-world-politics\">http://www.documentarytube.com/articles/top-5-secret-societies-with-real-power-in-world-politics</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Inside the Secret:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.washingtonpost.com/opinions/a-secret-society-dedicated-to-making-trump-look-bad/2018/01/24/d0ba9d0e-0156-11e8-9d31-d72cf78dbeee_story.html\">https://www.washingtonpost.com/opinions/a-secret-society-dedicated-to-making-trump-look-bad/2018/01/24/d0ba9d0e-0156-11e8-9d31-d72cf78dbeee_story.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Auditing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Old sense of Auditing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://asq.org/quality-resources/auditing\">https://asq.org/quality-resources/auditing</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Scientology about Auditing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.scientology.org/what-is-scientology/the-practice-of-scientology/auditing-in-scientology.html\">https://www.scientology.org/what-is-scientology/the-practice-of-scientology/auditing-in-scientology.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Example of Abuse of Auditing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.huffpost.com/entry/scientology-auditing_n_6971680?guccounter=1&amp;guce_referrer=aHR0cHM6Ly9kZS5zZWFyY2gueWFob28uY29tLw&amp;guce_referrer_sig=AQAAAA5WlSFHbDoBqvJsY2tdUA0dtyyWlBJBcX5_Z6PAbOz4xEyhZG0mN8_mhgCsyPoezr4C-OtEe4zT4oEb8zskUMLCbgx3DYYGRIp0W_3QW3DeBre34r-yjhGNCI567hLBEXhLPdZyzTlvzvN_-oBaN3oCwaTcKU8kg6lR26VZRSKn\">https://www.huffpost.com/entry/scientology-auditing_n_6971680?guccounter=1&amp;guce_referrer=aHR0cHM6Ly9kZS5zZWFyY2gueWFob28uY29tLw&amp;guce_referrer_sig=AQAAAA5WlSFHbDoBqvJsY2tdUA0dtyyWlBJBcX5_Z6PAbOz4xEyhZG0mN8_mhgCsyPoezr4C-OtEe4zT4oEb8zskUMLCbgx3DYYGRIp0W_3QW3DeBre34r-yjhGNCI567hLBEXhLPdZyzTlvzvN_-oBaN3oCwaTcKU8kg6lR26VZRSKn</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\">What&rsquo;s are Experiments with Drugs:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Information&rsquo;s about Drug Experiments:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.sciencedirect.com/science/article/pii/B9781483198712500074\">https://www.sciencedirect.com/science/article/pii/B9781483198712500074</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Teenager Experiments with Drugs:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://drugabuse.com/11-real-reasons-teenagers-experiment-drugs\">https://drugabuse.com/11-real-reasons-teenagers-experiment-drugs</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Clinical Experiments with Drugs:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/6657037\">https://www.ncbi.nlm.nih.gov/pubmed/6657037</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Satsang:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Yogapedia about Satsang:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.yogapedia.com/definition/4997/satsang\">https://www.yogapedia.com/definition/4997/satsang</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Endless Satsang:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://endless-satsang.com/nondual-advaita-satsang.htm\">https://endless-satsang.com/nondual-advaita-satsang.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">BeingSatsang.com:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.beingsatsang.com/what-is-satsang\">https://www.beingsatsang.com/what-is-satsang</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://commons.wikimedia.org/wiki/File:Satsang_Vihar,_Ambicapatty.jpg\">https://commons.wikimedia.org/wiki/File:Satsang_Vihar,_Ambicapatty.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">--------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/de2015556564d905ddbbfa097044c388.jpg\" alt=\"\" width=\"344\" height=\"229\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">But Art? Art can been the Solution we are looking for.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art is now ready to take on the challenge.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art will Conquer Back our Mind.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">But this are only the Force of Art, when Art is in this Sense in which Art should be.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Its the Sense of his Essence and Nature.</p>\n<ol>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art must be the playing with the Possible.</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art must be the playing with the Boundary&rsquo;s of Being.</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art must be a Process of equal and liberty intelligent Entity&rsquo;s.</p>\n</li>\n</ol>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Methodology of Art Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.researchgate.net/publication/304996838_Art_Therapy_-_A_Review_of_Methodology\">https://www.researchgate.net/publication/304996838_Art_Therapy_-_A_Review_of_Methodology</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Creativity in Art Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/315829570_Creativity_in_Art_Therapy\">https://www.researchgate.net/publication/315829570_Creativity_in_Art_Therapy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Power of Art Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.alustforlife.com/tools/mental-health/the-healing-power-of-art-therapy\">https://www.alustforlife.com/tools/mental-health/the-healing-power-of-art-therapy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art for Freedom:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art for Freedom:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.dw.com/de/themen/art-of-freedom-freedom-of-art/s-32538\">https://www.dw.com/de/themen/art-of-freedom-freedom-of-art/s-32538</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art Works for Freedom:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.artworksforfreedom.org/\">https://www.artworksforfreedom.org</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">On Facebook:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.facebook.com/ArtForFreedom\">https://www.facebook.com/ArtForFreedom</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art for the Freedom of Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Freedom on Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://freedomofmind.org.uk/events/category/art\">https://freedomofmind.org.uk/events/category/art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Asia Contemporary Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.asiacontemporaryart.com/artists/artwork/FreedomofMindMixedMe/en\">https://www.asiacontemporaryart.com/artists/artwork/FreedomofMindMixedMe/en</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Examples:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.pinterest.de/Impalm/freedom-of-mind\">https://www.pinterest.de/Impalm/freedom-of-mind</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s True Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Meaning of Life:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://philosophicalpointofview.wordpress.com/2016/10/20/true-art-2\">https://philosophicalpointofview.wordpress.com/2016/10/20/true-art-2</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Humanists Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://humanist-art.org/sof-whats-true\">https://humanist-art.org/sof-whats-true</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Entertainment or True Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://blog.sonicbids.com/whats-the-difference-between-entertainment-and-true-art\">https://blog.sonicbids.com/whats-the-difference-between-entertainment-and-true-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Possibility&rsquo;s of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art of Possibility:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://decidingfactor.us/services/strategy/art-of-possibility\">https://decidingfactor.us/services/strategy/art-of-possibility</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Lyrics with the Term:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.lyrics.com/lyrics/possibility\">https://www.lyrics.com/lyrics/possibility</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Synonyms of Possibility:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thesaurus.com/browse/possibilities\">https://www.thesaurus.com/browse/possibilities</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Freedom_Monument,_Trujillo.jpg\">https://commons.wikimedia.org/wiki/File:Freedom_Monument,_Trujillo.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/aac6fe778f08683bf79d0aa081273180.jpg\" alt=\"\" width=\"344\" height=\"441\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So let&rsquo;s take a short look at the history of modern Art.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Modern Art begins with the development of Impressionism and Expressionism.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Let&rsquo;s explain this with the example of a child.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The first is the child that hears its parents speaking.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The second is the child that tries to speak for itself.</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The History of modern Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art History:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.arthistory.net/modern-art\">http://www.arthistory.net/modern-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Lexicon Entry:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/modern-art-to-1945-2080464\">https://www.britannica.com/topic/modern-art-to-1945-2080464</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Development of Modern Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.theartstory.org/definition/modern-art/history-and-concepts\">https://www.theartstory.org/definition/modern-art/history-and-concepts</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Impressionism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Lexicon Entry:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/art/Impressionism-art\">https://www.britannica.com/art/Impressionism-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Artsy.net:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.artsy.net/article/artsy-editorial-monet-impressionists-paved-way-modern-art\">https://www.artsy.net/article/artsy-editorial-monet-impressionists-paved-way-modern-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Legacy of Impressionism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://mymodernmet.com/what-is-impressionism-definition/\">https://mymodernmet.com/what-is-impressionism-definition</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 
0cm; line-height: 100%;\">What&rsquo;s Expressionism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Lexicon Entry:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/art/Expressionism\">https://www.britannica.com/art/Expressionism</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art Movements:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.artmovements.co.uk/expressionism.htm\">http://www.artmovements.co.uk/expressionism.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art Term:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.tate.org.uk/art/art-terms/e/expressionism\">https://www.tate.org.uk/art/art-terms/e/expressionism</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Learning by early Children:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Learning:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://eacea.ec.europa.eu/national-policies/eurydice/content/teaching-and-learning-early-childhood-education-and-care-8_en\">https://eacea.ec.europa.eu/national-policies/eurydice/content/teaching-and-learning-early-childhood-education-and-care-8_en</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Early Childhood Learning:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://edc.org/body-work/early-childhood-development-and-learning\">https://edc.org/body-work/early-childhood-development-and-learning</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Theory of Early Learning:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://deansforimpact.org/wp-content/uploads/2017/01/The_Science_of_Early_Learning.pdf\">https://deansforimpact.org/wp-content/uploads/2017/01/The_Science_of_Early_Learning.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Nepalese_Children_in_Tadapani,_Ghandruk-Nepal-4428.jpg\">https://commons.wikimedia.org/wiki/File:Nepalese_Children_in_Tadapani,_Ghandruk-Nepal-4428.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/5302d62ce5a180739d53bf77ea66fee3.jpg\" alt=\"\" width=\"344\" height=\"359\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">The first are the Consequence out of whatever Art have been.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Try to become Power about the Relation&rsquo;s of Nature.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Try to understand Nature by doing it like God. 
By copy the act of creation.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Ritual Act of Prehistorical Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Rock Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/280624715_Rock_Art_Religion_and_Ritual_In_The_Archaeology_of_ritual_and_religion_ed_Tim_Insoll_Oxford_2011\">https://www.researchgate.net/publication/280624715_Rock_Art_Religion_and_Ritual_In_The_Archaeology_of_ritual_and_religion_ed_Tim_Insoll_Oxford_2011</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Rock Art II:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/226340870_Rock_Art_and_Ritual_An_Archaeological_Analysis_of_Rock_Art_in_Arid_Central_Australia\">https://www.researchgate.net/publication/226340870_Rock_Art_and_Ritual_An_Archaeological_Analysis_of_Rock_Art_in_Arid_Central_Australia</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">prehistoric religion:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/prehistoric-religion\">https://www.britannica.com/topic/prehistoric-religion</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Artist are copy the creation by Good:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Mimesis:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/art/mimesis\">https://www.britannica.com/art/mimesis</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Simularcum:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Simulacrum\">https://en.wikipedia.org/wiki/Simulacrum</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Principle of Creation:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/3847705?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/3847705?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg\">https://commons.wikimedia.org/wiki/File:Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/8f122f5a90dcd101c6a918705b6d946c.jpg\" alt=\"\" width=\"344\" height=\"457\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">The second is an act of emancipation. The Human now has the power of photography. He has the power to copy the World, to copy it down into a picture. And the Human of the future has the power of computational processing. He can process information and data with the power of his universal slave, and this slave is the computer. In these days the computer begins to master the domination of the World. So the Human has started to sketch down his visions of the diversity of the possible. In short: he begins to become God for himself.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Emancipation of modern Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Bauhaus-ReUse:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.bauhaus-reuse.de/index.php/content/modern-emancipation\">http://www.bauhaus-reuse.de/index.php/content/modern-emancipation</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art and Emancipation:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://people.fas.harvard.edu/~kliger/papers/ngc_kliger.pdf\">https://people.fas.harvard.edu/~kliger/papers/ngc_kliger.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Legitimacy of modern Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://pubs.socialistreviewindex.org.uk/isj80/art.htm\">http://pubs.socialistreviewindex.org.uk/isj80/art.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Photo-realism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Lexicon Entry:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/art/Photo-realism\">https://www.britannica.com/art/Photo-realism</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Definition of:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.dictionary.com/browse/photorealism\">https://www.dictionary.com/browse/photorealism</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Point of Photo-realism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://lachri.com/whats-the-point-of-photorealism/\">https://lachri.com/whats-the-point-of-photorealism</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 
0cm; line-height: 100%;\">Computational Processing as Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What is it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.interaliamag.org/interviews/ernest-a-edmonds\">https://www.interaliamag.org/interviews/ernest-a-edmonds</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Processing, a Framework for Computational Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://processing.org/books\">https://processing.org/books</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Theory of Computational Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://hrenatoh.net/curso/processing/processing_creative_coding.pdf\">http://hrenatoh.net/curso/processing/processing_creative_coding.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Worldview of a Computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Computation as a Paradigm of Worldview:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/332568219_From_Computer_Science_to_the_Informational_Worldview_Philosophical_Interpretations_of_Some_Computer_Science_Concepts\">https://www.researchgate.net/publication/332568219_From_Computer_Science_to_the_Informational_Worldview_Philosophical_Interpretations_of_Some_Computer_Science_Concepts</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Introduction to Computational Semantics:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1162/handouts/Computational-Semantics.pdf\">https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1162/handouts/Computational-Semantics.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Book about it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\"><a href=\"http://www.let.rug.nl/bos/comsem/book1.html\">http://www.let.rug.nl/bos/comsem/book1.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Human at the Boundary&rsquo;s of Possibility:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Human Body as Boundary:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/1398450?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/1398450?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Boundary Expression in AI:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.cs.cmu.edu/~efros/courses/LBMV09/presentations/GlobalPb.pdf\">https://www.cs.cmu.edu/~efros/courses/LBMV09/presentations/GlobalPb.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Boundary of Truth:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.theguardian.com/media/2019/sep/19/why-cant-we-agree-on-whats-true-anymore\">https://www.theguardian.com/media/2019/sep/19/why-cant-we-agree-on-whats-true-anymore</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Arlequin_modern_art.jpg\">https://commons.wikimedia.org/wiki/File:Arlequin_modern_art.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-----------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/c86eb3b2f77c3cfebe0f5169c97cc929.jpg\" alt=\"\" width=\"344\" height=\"344\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 
0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">Let&rsquo;s recapitulate this as the process by which a child learns to speak. In this way Art can be a Language. But Art is a secondary Language. We learn, or better conquer, this Language mostly at an adult age.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">So let&rsquo;s see more clearly what this development of Art is.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">At first the child has to try to listen to its surroundings. It has to try to hear which different sounds it can perceive. It has to try to become clear about which semantics these sounds can have. So it learns to understand its surrounding sounds. It learns in this way to understand its surroundings through these sounds.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">In Art and Music it has to accept its reactive mind, and it begins to play with it.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to learn Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">ArtyFactory:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.artyfactory.com/\">https://www.artyfactory.com</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Fundamentals of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://conceptartempire.com/what-are-the-fundamentals\">https://conceptartempire.com/what-are-the-fundamentals</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The best Way to learn Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.artsyshark.com/2014/04/17/whats-the-best-way-to-learn-art\">https://www.artsyshark.com/2014/04/17/whats-the-best-way-to-learn-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to become an Artist:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">WikiHow.com:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.wikihow.com/Become-an-Artist\">https://www.wikihow.com/Become-an-Artist</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to study Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://study.com/becoming_an_artist.html\">https://study.com/becoming_an_artist.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to become Famous:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.wikihow.com/Become-a-Famous-Artist\">https://www.wikihow.com/Become-a-Famous-Artist</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s a Soundscape:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Article as Introduction:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://folklife.si.edu/talkstory/the-sound-of-life-what-is-a-soundscape\">https://folklife.si.edu/talkstory/the-sound-of-life-what-is-a-soundscape</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Sound-Design:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://sound.stackexchange.com/questions/11460/what-is-a-soundscape\">https://sound.stackexchange.com/questions/11460/what-is-a-soundscape</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Definition of Soundscape:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thefreedictionary.com/soundscape\">https://www.thefreedictionary.com/soundscape</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Semantic of Sounds:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to semantically define a Sound:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://assets.madebydelta.com/assets/docs/share/Akustik/The_Semantic_Space_of_Sounds.pdf\">https://assets.madebydelta.com/assets/docs/share/Akustik/The_Semantic_Space_of_Sounds.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Semantics of Product Sounds:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.ijdesign.org/index.php/IJDesign/article/view/957/473\">http://www.ijdesign.org/index.php/IJDesign/article/view/957/473</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Semantics of Sounds in the Brain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://artblot.wordpress.com/2013/08/13/the-semantics-of-sound/\">https://artblot.wordpress.com/2013/08/13/the-semantics-of-sound</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The unconscious Mind and Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Music for the unconscious Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://bigthink.com/ideafeed/understanding-how-music-stimulates-the-unconscious-mind\">https://bigthink.com/ideafeed/understanding-how-music-stimulates-the-unconscious-mind</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Music and Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.sfxmachine.com/docs/musicandconsciousness.html\">http://www.sfxmachine.com/docs/musicandconsciousness.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Music and Behavior:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://rampages.us/timminsc/subconscious-impact\">https://rampages.us/timminsc/subconscious-impact</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://commons.wikimedia.org/wiki/File:The_Sounds_of_Earth_Record_Cover_-_GPN-2000-001978.jpg\">https://commons.wikimedia.org/wiki/File:The_Sounds_of_Earth_Record_Cover_-_GPN-2000-001978.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">___</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">It accepts the Relation between Actions and Reactions. At first it only sees the coming and going of Perceptions. But then it begins to see the Relations between them. So we should better say it is not a reactive but a statistical Mind. It begins to distinguish between Semes and the corresponding Processes in Nature.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">___</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/34d5528588d7c89bfd155cb8defe51b4.png\" alt=\"\" width=\"344\" height=\"188\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">In the second step the child begins to speak as an act of itself. It tries to pour its Ideas into the semantic System of Sounds, and so it tries to speak. It begins to create its own Semes and listens to the Reactions. It listens to the Reactions of its Surroundings to its Actions. This is the source of the form of every Sentence of Relation in Science, in which Science is the making of controlled Experience. In this way it explores the statistical Relations, which are the Relations between the World as the source of Actions and the following Reactions. And Reactions are the following Changes of the World, in which the World is the Set of Things it can listen to.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Semantic Coding:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Technical Semantic Code:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://boagworld.com/dev/semantic-code-what-why-how\">https://boagworld.com/dev/semantic-code-what-why-how</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Psychological Semantic Code:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://dictionary.apa.org/semantic-code\">https://dictionary.apa.org/semantic-code</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Guide to clean Code:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://medium.com/@Moin_Gani/semantic-code-and-indentation-a-beginners-guide-to-clean-coding-2fff9aaa901b\">https://medium.com/@Moin_Gani/semantic-code-and-indentation-a-beginners-guide-to-clean-coding-2fff9aaa901b</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Seme and Reaction:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Semantic Relations:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.sciencedirect.com/topics/computer-science/semantic-relation\">https://www.sciencedirect.com/topics/computer-science/semantic-relation</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Definition of Semantic Relation:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thefreedictionary.com/semantic+relation\">https://www.thefreedictionary.com/semantic+relation</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Introduction to Semantic Relation:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://sameersingh.org/courses/statnlp/wi17/slides/lecture-0214-semantic-relations.pdf\">http://sameersingh.org/courses/statnlp/wi17/slides/lecture-0214-semantic-relations.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Form of Scientific Sentence:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Most used Sentences:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.editage.com/insights/what-kind-of-sentences-are-preferred-in-scientific-writing-long-or-short\">https://www.editage.com/insights/what-kind-of-sentences-are-preferred-in-scientific-writing-long-or-short</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Guide to Scientific Writing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://students.uu.nl/sites/default/files/ge0-aw-guide-for-scientific-writing-2016.pdf\">https://students.uu.nl/sites/default/files/ge0-aw-guide-for-scientific-writing-2016.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Don&rsquo;t go in Scientific Writing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.editage.com/insights/scientific-writing-avoid-starting-sentences-with-a-number-or-abbreviation\">https://www.editage.com/insights/scientific-writing-avoid-starting-sentences-with-a-number-or-abbreviation</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Science and Experiment:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Scientific Method:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.khanacademy.org/science/high-school-biology/hs-biology-foundations/hs-biology-and-the-scientific-method/a/the-science-of-biology\">https://www.khanacademy.org/science/high-school-biology/hs-biology-foundations/hs-biology-and-the-scientific-method/a/the-science-of-biology</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Lexicon Entry:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/science/scientific-method\">https://www.britannica.com/science/scientific-method</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Article about it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.sciencebuddies.org/science-fair-projects/science-fair/steps-of-the-scientific-method\">https://www.sciencebuddies.org/science-fair-projects/science-fair/steps-of-the-scientific-method</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Action and Reaction:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Introduction to Mechanics:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://books.google.de/books?id=P1kCtNr-pJsC&amp;printsec=frontcover&amp;redir_esc=y#v=onepage&amp;q&amp;f=false\">https://books.google.de/books?id=P1kCtNr-pJsC&amp;printsec=frontcover&amp;redir_esc=y#v=onepage&amp;q&amp;f=false</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Set of Lexicon Entries:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/science/law-of-action-and-reaction\">https://www.britannica.com/science/law-of-action-and-reaction</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Principles of Physical Science:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.britannica.com/science/principles-of-physical-science/Laws-of-motion\">https://www.britannica.com/science/principles-of-physical-science/Laws-of-motion</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The World of Experience:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Short Theory of Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog1701_1\">https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog1701_1</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The creative Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.nature.com/articles/s41599-017-0024-1.pdf?origin=ppub\">https://www.nature.com/articles/s41599-017-0024-1.pdf?origin=ppub</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Science of Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(16)30204-2\">https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(16)30204-2</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/969454f4e5059d68de008d2728c698f2.jpg\" alt=\"\" width=\"344\" height=\"281\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">But this Form can be treated as a Engram. This is a Kind of Engram which is called the Cycle of Cognition. 
It is given to us by Evolution, with our Birth as Humans. And this Engram leads us into philosophical Trouble. We believe everything must stand in such a Relation. And these Relations have, in a special Form, become our Regularities of Nature. And even these are Regularities and not Laws. And we think, and this is a pitfall, that this gives us Human Liberty. In short, all rights are interventions of stimulation that train us to Act in a special Way. We are subjects of Law.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Engram on Action and Reaction:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Article in Relation:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6527697/pdf/41467_2019_Article_9960.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6527697/pdf/41467_2019_Article_9960.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Action and Reaction in Science:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.physicsclassroom.com/class/momentum/Lesson-2/The-Law-of-Action-Reaction-(Revisited)\">https://www.physicsclassroom.com/class/momentum/Lesson-2/The-Law-of-Action-Reaction-(Revisited)</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Physics and Mind &hellip;:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://buddhism.stackexchange.com/questions/8577/if-physical-action-causes-reaction-doesnt-mental-action-cause-reaction-too\">https://buddhism.stackexchange.com/questions/8577/if-physical-action-causes-reaction-doesnt-mental-action-cause-reaction-too</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Evolution of Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Theory of Evolution of Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\"><a href=\"https://www.researchgate.net/publication/286450326_Introduction_The_Evolution_of_Mind_Brain_and_Culture\">https://www.researchgate.net/publication/286450326_Introduction_The_Evolution_of_Mind_Brain_and_Culture</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Evolution of a Theory of Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/237401773_Evolution_of_a_theory_of_mind\">https://www.researchgate.net/publication/237401773_Evolution_of_a_theory_of_mind</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Psychological View on it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.cep.ucsb.edu/papers/Ev_mind_funct_spec.pdf\">https://www.cep.ucsb.edu/papers/Ev_mind_funct_spec.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Need for Relations in the Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Relations between Mind and Brain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://philosophynow.org/issues/65/How_Are_The_Mind_And_Brain_Related\">https://philosophynow.org/issues/65/How_Are_The_Mind_And_Brain_Related</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Stop Thinking in Relations:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.wikihow.com/Stop-Over-Thinking-in-a-Relationship\">https://www.wikihow.com/Stop-Over-Thinking-in-a-Relationship</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Or the Benefit of Thinking in Relations:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://iversity.org/blog/5-ways-to-benefit-from-critical-thinking-in-relationships\">https://iversity.org/blog/5-ways-to-benefit-from-critical-thinking-in-relationships</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\">What&rsquo;s are Regularity's of Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophy of Regularity&rsquo;s in Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://philosophydungeon.weebly.com/order--regularity.html\">https://philosophydungeon.weebly.com/order--regularity.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Creation Research:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.icr.org/article/regularity-nature\">https://www.icr.org/article/regularity-nature</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Questions for Atheist:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.patheos.com/blogs/unequallyyoked/2010/11/questions-for-atheists-why-is-there-regularitylaw-in-nature.html\">https://www.patheos.com/blogs/unequallyyoked/2010/11/questions-for-atheists-why-is-there-regularitylaw-in-nature.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Essence of Law:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Tomas Aquinas:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.nlnrac.org/classical/aquinas/documents/question-90-the-essence-of-law\">http://www.nlnrac.org/classical/aquinas/documents/question-90-the-essence-of-law</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Catholic Apologetic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://catholicapologetics.info/catholicteaching/philosophy/essence.htm\">http://catholicapologetics.info/catholicteaching/philosophy/essence.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">DRCPeace:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://drcpeace.org/the-essence-of-law/\">http://drcpeace.org/the-essence-of-law</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Law and Freedom on Mind in Science:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Ideas about Law:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/j.ctt5hhffn\">https://www.jstor.org/stable/j.ctt5hhffn</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Freedom and the Law on Science:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.themontrealreview.com/2009/Freedom-and-the-laws-of-nature.php\">http://www.themontrealreview.com/2009/Freedom-and-the-laws-of-nature.php</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Right to Freedom:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.frontiersin.org/articles/10.3389/frai.2019.00019/full\">https://www.frontiersin.org/articles/10.3389/frai.2019.00019/full</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Barbara_McClintock_(1902-1992)_shown_in_her_laboratory_in_1947.jpg\">https://commons.wikimedia.org/wiki/File:Barbara_McClintock_(1902-1992)_shown_in_her_laboratory_in_1947.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/8185b818f54c4fe5c7b2ef4db53dd4a7.png\" alt=\"\" width=\"344\" height=\"166\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">This problem is Part of the constitutive Set of Engram&rsquo;s of Humanity. The Human Child has this Engram&rsquo;s already before he try to Live. It is given him by the Structure of His Brain. 
By a Structure which is so important that his Body without it cannot be called Human. Yes, it is in our opinion one of the most constitutive Aspects of Being, of Being as a Conscious Human. A Human without it could not have an analytical Mind. So the Structure of the analytical Mind is built on this Foundation, which is the Foundation of the reactive Mind. Consciousness is the Result of a Process of the reactive Mind. And yes, this is the Way most People, like us, think: in the Speech that has been spoken to them. This Speech gives the analytical Mind its Building Blocks, its Bricks to construct any explicit Idea. All the rest are Feelings. But Feelings are the Cement out of which the Bricks of Ideas are made.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Is there a constitutive Set of Engrams:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Engram Configuration Tool:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://arksurvivalevolved.gamewalkthrough-universe.com/dedicatedservers/customizationtools/engrams/Default.aspx\">http://arksurvivalevolved.gamewalkthrough-universe.com/dedicatedservers/customizationtools/engrams/Default.aspx</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Destiny Wiki:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://destiny.fandom.com/wiki/Engram\">https://destiny.fandom.com/wiki/Engram</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Gamepedia.com:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://ark.gamepedia.com/Engrams\">https://ark.gamepedia.com/Engrams</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Engrams in Evolution and the Structure of the Brain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Engrams in the Human Brain:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1902435/pdf/procrsmed00153-0115.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1902435/pdf/procrsmed00153-0115.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Scientology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.iscientology.org/scientology-blog/461-engrams\">http://www.iscientology.org/scientology-blog/461-engrams</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Evolution of the Human Brain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thoughtco.com/evolution-of-the-human-brain-1224780\">https://www.thoughtco.com/evolution-of-the-human-brain-1224780</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s a conscious Human:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Article about the Conscious Human:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5924785/pdf/fpsyg-09-00567.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5924785/pdf/fpsyg-09-00567.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Psychology of Awareness:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.verywellmind.com/what-is-consciousness-2795922\">https://www.verywellmind.com/what-is-consciousness-2795922</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">From Consciousness to the Human Soul:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://dreamcatcherreality.com/consciousness-human-brain-soul\">http://dreamcatcherreality.com/consciousness-human-brain-soul</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Relation between the Conscious and the Non-Conscious Mind:</p>\n<p style=\"margin-bottom: 
0cm; line-height: 100%;\">The Level of Mind by Freud:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.verywellmind.com/the-conscious-and-unconscious-mind-2795946\">https://www.verywellmind.com/the-conscious-and-unconscious-mind-2795946</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Trinity of Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://consciousreminder.com/2017/01/26/trinity-mind-conscious-subconscious-unconscious-mind\">https://consciousreminder.com/2017/01/26/trinity-mind-conscious-subconscious-unconscious-mind</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A more Esoteric View on the Trinity of Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ananda.org/ask/levels-of-consciousness-and-what-they-represent\">https://www.ananda.org/ask/levels-of-consciousness-and-what-they-represent</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Thinking in Speech:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Do We think in Language:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://charbonniers.org/2013/07/04/do-we-think-in-language\">https://charbonniers.org/2013/07/04/do-we-think-in-language</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Writing Center:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://writingcenter.unc.edu/tips-and-tools/speeches\">https://writingcenter.unc.edu/tips-and-tools/speeches</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How we think before we Speak</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.zmescience.com/research/how-we-think-before-we-speak-04232\">https://www.zmescience.com/research/how-we-think-before-we-speak-04232</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What are Feelings:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Introduction to Feelings:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.psychologytoday.com/us/blog/great-kids-great-parents/201603/what-are-feelings\">https://www.psychologytoday.com/us/blog/great-kids-great-parents/201603/what-are-feelings</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Blog Entry about Feelings:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://blog.mindvalley.com/what-are-feelings\">https://blog.mindvalley.com/what-are-feelings</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Attempt at a Definition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/What-are-feelings-2\">https://www.quora.com/What-are-feelings-2</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Feelings and the Non-Conscious Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Feeling Numb:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.medicalnewstoday.com/articles/320049\">https://www.medicalnewstoday.com/articles/320049</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Mental Disorders:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://kidshealth.org/Nemours/en/parents/az-mental-nonpsychotic.html\">https://kidshealth.org/Nemours/en/parents/az-mental-nonpsychotic.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Role of the non-conscious Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.verywellmind.com/what-is-the-conscious-mind-2794984\">https://www.verywellmind.com/what-is-the-conscious-mind-2794984</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/834edffdb347349f4cab3596e0d65ba9.jpg\" alt=\"\" width=\"344\" height=\"612\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So the first Challenge of the &ldquo;New-Art&rdquo; would be to explore the Relations between Human Liberty and this Regularities. The Regularities of Nature. And the Problem is to choose as Human to be a Member or a Part of Nature or not.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Shorter: The First-Challenge are the Question = Be I self a Part of Nature.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Human as Part of Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Try&rsquo;s to Answer this Question:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/Are-humans-a-part-of-nature\">https://www.quora.com/Are-humans-a-part-of-nature</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia about Nature-Contentedness:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Nature_connectedness\">https://en.wikipedia.org/wiki/Nature_connectedness</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Distinction between Humans and Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/24707479?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/24707479?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Difference between Culture and Nature:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia explains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Nature%E2%80%93culture_divide\">https://en.wikipedia.org/wiki/Nature%E2%80%93culture_divide</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Journal Entry:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://journals.openedition.org/elohi/213\">https://journals.openedition.org/elohi/213</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A closer Look at it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.bartleby.com/essay/Differences-Between-Nature-And-Culture-F33845VKRYKW\">https://www.bartleby.com/essay/Differences-Between-Nature-And-Culture-F33845VKRYKW</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Philosophical Problem of Freedom and the Regularities of Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Lexicon Entry on the Laws of Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.iep.utm.edu/lawofnat\">https://www.iep.utm.edu/lawofnat</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Lexicon Entry: Hume on Free Will:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://plato.stanford.edu/entries/hume-freewill\">https://plato.stanford.edu/entries/hume-freewill</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Immanuel Kant:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://plato.stanford.edu/entries/kant\">https://plato.stanford.edu/entries/kant</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://commons.wikimedia.org/wiki/File:Daslook_(Allium_ursinum)_d.j.b_02.jpg\">https://commons.wikimedia.org/wiki/File:Daslook_(Allium_ursinum)_d.j.b_02.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">--------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/37b6a2b9d07bb21191be6190bf89002a.png\" alt=\"\" width=\"344\" height=\"245\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">The Question is: Is this the Way to break the Cycle of Cognition? If yes, we have to go back far behind the Sources of our Self. The Way back is by Clearing the Mind. But Clearing the Mind by analysing the occurring Processes of Thinking with the analytical Mind is like measuring the Length of a folding Rule with the Rule itself. 
And the result should be clear to all intelligent Species.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Auditing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Auditing without Scientology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.accountingedu.org/what-is-auditing.html\">https://www.accountingedu.org/what-is-auditing.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia explains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Auditing_(Scientology)\">https://en.wikipedia.org/wiki/Auditing_(Scientology)</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Scientology itself explains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.scientology.org/what-is-scientology/the-practice-of-scientology/auditing-in-scientology.html\">https://www.scientology.org/what-is-scientology/the-practice-of-scientology/auditing-in-scientology.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Truth about Auditing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Operation Clambake:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.xenu.net/\">http://www.xenu.net/</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Skepdic.com:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://skepdic.com/dianetic.html\">http://skepdic.com/dianetic.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Death of a Scientologist:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.chicagoreader.com/chicago/death-of-a-scientologist/Content?oid=909370\">https://www.chicagoreader.com/chicago/death-of-a-scientologist/Content?oid=909370</a></p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Breaking the Cycle of Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Cognitive Behavior Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.pharmaceutical-journal.com/news-and-analysis/infographics/cognitive-behavioural-therapy-breaking-the-cycle/20207745.article?firstPass=false\">https://www.pharmaceutical-journal.com/news-and-analysis/infographics/cognitive-behavioural-therapy-breaking-the-cycle/20207745.article?firstPass=false</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Breaking the Cycle Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/21154016\">https://www.ncbi.nlm.nih.gov/pubmed/21154016</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Mindfulness based Cognitive Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://mari.umich.edu/psych-clinic/mindfulness-based-cognitive-therapy-break-the-cycle-of-depression\">https://mari.umich.edu/psych-clinic/mindfulness-based-cognitive-therapy-break-the-cycle-of-depression</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Engrams of Lives before Life:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Three Brains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.interaction-design.org/literature/article/our-three-brains-the-emotional-brain\">https://www.interaction-design.org/literature/article/our-three-brains-the-emotional-brain</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How Brains control Thoughts:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://opentextbc.ca/introductiontopsychology/chapter/3-2-our-brains-control-our-thoughts-feelings-and-behavior/\">https://opentextbc.ca/introductiontopsychology/chapter/3-2-our-brains-control-our-thoughts-feelings-and-behavior/</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The History of our Brain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.newscientist.com/article/mg21128311-800-a-brief-history-of-the-brain\">https://www.newscientist.com/article/mg21128311-800-a-brief-history-of-the-brain</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Computed_tomography_of_human_brain_-_large.png\">https://commons.wikimedia.org/wiki/File:Computed_tomography_of_human_brain_-_large.png</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">--------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/476c27bf990fc99eb77bb5742e8d2335.jpg\" alt=\"\" width=\"344\" height=\"344\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So we believe and affirm: the only right Way Back can lead only through, and not against, the reactive Mind. And this, in our opinion, is the main Mistake of the Theory of Scientology. But the Way we show, the Way of Art, is the right Way. The right Way back to recreating a free Mind. 
And the Art which takes up this Challenge is called &ldquo;The New-Art&rdquo;.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Modern Therapies through Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Development of modern Art-Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/1573248?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/1573248?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Arts in Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.moma.org/calendar/exhibitions/3127\">https://www.moma.org/calendar/exhibitions/3127</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Art Therapies:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/320322855_Art_therapy_in_the_modern_world_and_music_therapy_in_particular\">https://www.researchgate.net/publication/320322855_Art_therapy_in_the_modern_world_and_music_therapy_in_particular</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art which revolutionises the Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Moma.org:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.moma.org/learn/moma_learning/themes/surrealism/tapping-the-subconscious-automatism-and-dreams\">https://www.moma.org/learn/moma_learning/themes/surrealism/tapping-the-subconscious-automatism-and-dreams</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Moma.org on modern Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.moma.org/learn/moma_learning/themes/what-is-modern-art\">https://www.moma.org/learn/moma_learning/themes/what-is-modern-art</a></p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\">The Art Story on Surrealism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.theartstory.org/movement/surrealism\">https://www.theartstory.org/movement/surrealism</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Depression-underlying-issues.jpg\">https://commons.wikimedia.org/wiki/File:Depression-underlying-issues.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/c751f1fe52edcc129458df45cda1dd14.jpg\" alt=\"\" width=\"344\" height=\"242\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So what should New-Art be:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">New Art is Art which belongs to a later concretion.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">New Art is Art which is a Method, a Process, not its Result.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">New Art is Art which leads to a personal and unique Work for the customer.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Access to Art-Therapy for everyone:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Accessing Art-Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.mind.org.uk/information-support/drugs-and-treatments/arts-and-creative-therapies/accessing-arts-and-creative-therapies\">https://www.mind.org.uk/information-support/drugs-and-treatments/arts-and-creative-therapies/accessing-arts-and-creative-therapies</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Effectiveness of Art-Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6124538/pdf/fpsyg-09-01531.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6124538/pdf/fpsyg-09-01531.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Removing Traumatic Processes with Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/222662583_Accessing_traumatic_memory_through_art_making_An_art_therapy_trauma_protocol_ATTP\">https://www.researchgate.net/publication/222662583_Accessing_traumatic_memory_through_art_making_An_art_therapy_trauma_protocol_ATTP</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Focus of this Art-Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why Art Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thearttherapypractice.com/why-art-therapy\">https://www.thearttherapypractice.com/why-art-therapy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Focusing of Art-Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://previous.focusing.org/folio/Vol21No12008/12_FocusingOrientTRIB.pdf\">http://previous.focusing.org/folio/Vol21No12008/12_FocusingOrientTRIB.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What Kind of Therapy is Art-Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://careersinpsychology.org/art-therapy\">https://careersinpsychology.org/art-therapy</a></p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Study-Analysis of the Artist Therapist:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Analysis of an Article on Art-Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/235983316_Critical_Analysis_of_Art_Therapy_Article\">https://www.researchgate.net/publication/235983316_Critical_Analysis_of_Art_Therapy_Article</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art and Public Healing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2804629/pdf/254.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2804629/pdf/254.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Teaching by Art-Therapy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/282289568_Art_Therapy_in_Schools-The_Therapist's_Perspective\">https://www.researchgate.net/publication/282289568_Art_Therapy_in_Schools-The_Therapist's_Perspective</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Art_Mediums_commonly_used_for_Art_Therapy.JPG\">https://commons.wikimedia.org/wiki/File:Art_Mediums_commonly_used_for_Art_Therapy.JPG</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/5464cc8639172ae2b66104ea99cdf522.jpg\" alt=\"\" width=\"344\" height=\"209\" /><img src=\"/media/uploads/user/f1561472ad00726959db2937eb6bba74.gif\" alt=\"\" width=\"344\" height=\"62\" 
/></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">New Art is Art which opens the mind of the customer to the Roots of his being.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">New Art is Art which leads to a Revolution of the mind. A Revolution in the Sense of calling back the Roots of Being.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">New Art is Art which is philosophical. It is philosophical in the Sense that it shows the logical Gaps of everyday understanding of the world.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Arts Leading to the World of Tomorrow:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Young Artists have the Power:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.theguardian.com/culture-professionals-network/2015/sep/07/young-artists-power-future-jane-hackett\">https://www.theguardian.com/culture-professionals-network/2015/sep/07/young-artists-power-future-jane-hackett</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art-Installation: Is this Tomorrow:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.whitechapelgallery.org/exhibitions/is-this-tomorrow\">https://www.whitechapelgallery.org/exhibitions/is-this-tomorrow</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How the Arts can Change the World:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.americansforthearts.org/by-program/reports-and-data/legislation-policy/what-is-arts-policy\">https://www.americansforthearts.org/by-program/reports-and-data/legislation-policy/what-is-arts-policy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The gentle revolution through art:</p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\">A First Revolution:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.remediosrapoport.com/the-gentle-revolution\">http://www.remediosrapoport.com/the-gentle-revolution</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Gentle Revolution:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://medium.com/the-gentle-revolution\">https://medium.com/the-gentle-revolution</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Gentle Revolution.nz:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.gentlerevolution.nz/\">https://www.gentlerevolution.nz</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art leading to the Bones of Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">To the Bone:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.buzzfeednews.com/article/krystieyandoli/behind-the-controversial-storyline-in-to-the-bone-about-art\">https://www.buzzfeednews.com/article/krystieyandoli/behind-the-controversial-storyline-in-to-the-bone-about-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Ally in Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.zocalopublicsquare.org/2016/11/01/skull-ally-art/viewings/glimpses\">https://www.zocalopublicsquare.org/2016/11/01/skull-ally-art/viewings/glimpses</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">BioHacking:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.redbull.com/int-en/artist-biohackers\">https://www.redbull.com/int-en/artist-biohackers</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophical Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art &amp; Philosophy:</p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\"><a href=\"https://www.the-philosophy.com/art-aesthetics\">https://www.the-philosophy.com/art-aesthetics</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Category Philosophers on Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Category:Philosophers_of_art\">https://en.wikipedia.org/wiki/Category:Philosophers_of_art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Philosophy of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thenation.com/article/archive/philosophy-art/\">https://www.thenation.com/article/archive/philosophy-art/</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Philosophy of logical Gaps:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Philosophical Forum:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://blogs.lse.ac.uk/theforum/logical-gaps\">https://blogs.lse.ac.uk/theforum/logical-gaps</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Aristotle's Logic Works:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://historyofphilosophy.net/aristotle-logic\">https://historyofphilosophy.net/aristotle-logic</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Article about Logical Gaps:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://eprints.lse.ac.uk/71433/1/33%20The%20Forum%20%E2%80%93%20Logical%20Gaps.pdf\">http://eprints.lse.ac.uk/71433/1/33%20The%20Forum%20%E2%80%93%20Logical%20Gaps.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img 
src=\"/media/uploads/user/1ba23051b2a1996e0dd6be2070b7d649.jpg\" alt=\"\" width=\"344\" height=\"229\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">New Art is a Revolution of the Understanding and Belief of ourselves and the World.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">No longer should the Leaders be Philosophers &hellip;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&hellip; but the Artists should be Leaders.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Leaders should become Philosophers:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What Plato can teach us:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.cityu.edu/blog/what-plato-can-teach-us-about-leadership-part-1-of-2\">https://www.cityu.edu/blog/what-plato-can-teach-us-about-leadership-part-1-of-2</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What Leaders can learn from Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.businessphilosopher.com/2018/06/22/what-leaders-ceos-can-learn-philosophy\">https://www.businessphilosopher.com/2018/06/22/what-leaders-ceos-can-learn-philosophy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Plato on Leadership:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/25073123?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/25073123?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Leaders should become Artists:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Every Leader is an Artist:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://hbr.org/2012/08/every-leader-is-an-artist\">https://hbr.org/2012/08/every-leader-is-an-artist</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How Artists and Leaders are Different:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://hbr.org/2009/05/how-artistleaders-do-things-di\">https://hbr.org/2009/05/how-artistleaders-do-things-di</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Can Leadership gain more Performance through Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.iedc.si/blog/single-blog-post/iedc-alumni-insights/2016/12/01/can-the-arts-really-help-and-inspire-better-leadership-performance\">https://www.iedc.si/blog/single-blog-post/iedc-alumni-insights/2016/12/01/can-the-arts-really-help-and-inspire-better-leadership-performance</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Great Show:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Illusion of Self:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.newscientist.com/round-up/self\">https://www.newscientist.com/round-up/self</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Mindfulness Exercises:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://mindfulnessexercises.com/great-illusion-reality\">https://mindfulnessexercises.com/great-illusion-reality</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why Reality is an Illusion:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://medium.com/personal-growth/reality-is-an-illusion-e03e779408b8\">https://medium.com/personal-growth/reality-is-an-illusion-e03e779408b8</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And the Truth:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia can explain 
Maya:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Maya_(religion)\">https://en.wikipedia.org/wiki/Maya_(religion)</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Encyclopedia on Hinduism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://archive.org/details/illustratedencyc0000loch\">https://archive.org/details/illustratedencyc0000loch</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Mental Factors:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://studybuddhism.com/en/advanced-studies/science-of-mind/mind-mental-factors/primary-minds-and-the-51-mental-factors\">http://studybuddhism.com/en/advanced-studies/science-of-mind/mind-mental-factors/primary-minds-and-the-51-mental-factors</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Naga_-_Buddhism.jpg\">https://commons.wikimedia.org/wiki/File:Naga_-_Buddhism.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----</p>\n<p lang=\"en-US\" style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So this is the Manifesto of &ldquo;New-Art&rdquo;</p>\n<p lang=\"en-US\" style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p lang=\"en-US\" style=\"margin-bottom: 0cm; line-height: 100%;\">CreCo</p>",
        "topics": [
            {
                "id": 96,
                "name": "Contemporary",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 390,
                "name": "Manifesto",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 389,
                "name": "New-art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17628,
            "forum_user": {
                "id": 17624,
                "user": 17628,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6389f37aeaee190f92e385b6a9b395f6?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "creco",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-manifesto-of-new-art-iii",
        "pk": 644,
        "published": false,
        "publish_date": "2020-04-25T19:08:57.082203+02:00"
    },
    {
        "title": "Elements of technique and language for performing and composing with the digital musical instrument Karlax: Case study of Instrumental Interaction III for guitar and Karlax by Benjamin Lavastre and Brice Gatinet",
        "description": "This talk presents elements of technique and language for composing and performing with the Karlax MIDI controller. Developed in the early 2010s, this interface is supported by an active community of composers/performers and an extensive repertoire. In this sense, the Karlax is an ideal candidate for a second phase of development with a DMI. Playing the Karlax or composing for this instrument requires a range of techniques and leads us to reconsider the place of instruments in musical practices. To this end, this presentation describes different types of technique for composition and interpretation from the piece Instrumental Interaction III (Gatinet & Lavastre, 2024), and proposes a philosophical and aesthetic framework based on the main issues of sonic identity, the sound-gesture relationship, interaction strategies and perception.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><img src=\"/media/uploads/ircamworkshop-gatinet-lavastre-photo2.png\" alt=\"\" width=\"1310\" height=\"592\" /></p>\r\n<p>Presented by: Benjamin Lavastre, Brice Gatinet</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/benjaminL/\" target=\"_blank\">Biography Benjamin Lavastre</a></p>\r\n<p><a href=\"https://forum.ircam.fr/profile/bricegatinet/\" target=\"_blank\">Biography Brice Gatinet</a></p>\r\n<p><a href=\"https://www.youtube.com/watch?v=fgcS_0ZRasI\" target=\"_blank\">Demonstration video</a></p>\r\n<p>This presentation looks back on more than four years of experience in composition, performance, research, analysis and teaching with the Karlax digital musical instrument (DMI). The Karlax is a two-handed interface whose main sensors are ten continuous keys, eight pistons giving velocity indications, and an inertial unit with three axes of accelerometers and gyroscopes. It also features a rotating axis at its center. Its design enables a large amount of control to be generated from simple gestures. Developed in the early 2010s, this MIDI controller has been praised for its design qualities, and draws on an active composer/performer community and extensive repertoire. This makes the Karlax an ideal candidate for a second, more in-depth phase with a DMI. Playing the Karlax or composing for this instrument requires a range of techniques and leads us to reconsider the place of instruments in musical practices. 
This presentation comments on a number of specific techniques, using concrete examples from the piece<em><span>&nbsp;</span>Instrumental Interaction III</em>, and defines a framework for reflection around the definition of an instrumental identity, the place of gesture, interaction strategies and perception.&nbsp;</p>\r\n<p>After more than 50 years' experience with digital musical instruments (DMIs) and the appearance of the first conclusive prototypes, notably Michel Waisvisz's The Hands instrument developed in the mid-1980s, several issues have been identified. Among these, the sound-gesture relationship constitutes a fundamental difference between digital and acoustic instruments. For acoustic instruments, this relationship is given by the physical behaviors of vibrating structures (e.g. strings, membranes, reeds or air columns, etc.). These structures vibrate in their own way, based on the properties of the materials. In other words, although strings, reeds and membranes observe complex vibration patterns, these structures can only vibrate in a limited number of ways. The performer's gestures and the resulting sounds are, however, inextricably linked by physical laws. Digital musical instruments (DMIs), on the other hand, consist of an interface connected to a sound-generating device (e.g. a computer and speakers), the two being linked by applications (mapping) defining the relationship between the performer's gestures and the resulting sounds. For DMIs, the sound-generating algorithm determines the &ldquo;vibrations&rdquo; that the instrument produces. Thus, the sound-gesture relationship is arbitrarily defined by the instrument designers, composers or performers. 
There is no inherent connection between the performer's actions and the resulting sound, which defines an unlimited number of possibilities for sound-gesture associations. The composers and performers of a DMI must then stage the desired sound result.</p>\r\n<p>Furthermore, with regard to design and conception, DMIs must meet certain requirements such as robustness, stability, precision, reproducibility and rapid response (low latency). These design qualities must enable not only good control quality but also instrumental virtuosity. Finally, access to DMIs happens to be relatively restricted, with few instruments making it beyond the prototype stage. Moreover, most DMIs encounter difficulties in establishing themselves over time and are most often played by a single performer. Consequently, it is difficult to define, for a given instrument, the &ldquo;habitus&rdquo; of interpretation, composition and listening necessary for its evolution. It is therefore necessary to build a creative community around a &ldquo;tried and tested&rdquo; DMI. This can be based on a varied and demanding repertoire of pieces exploring several expressive facets, and tools facilitating interpretation and composition. The choice of instrument must also take into account the possibility of repairs (renewal of worn parts, replacement of sensors, etc.). Finally, compositional strategies must also anticipate issues related to the obsolescence of computing environments housing programming, mapping and sound synthesis.</p>\r\n<p>With these observations in mind, this presentation focuses on the Karlax and proposes several composition and performance strategies from the piece<span>&nbsp;</span><em>Instrumental Interaction III<span>&nbsp;</span></em>(Gatinet &amp; Lavastre, 2024), notably data conditioning, playing techniques, mapping, types of sound synthesis, spatialization, programming and notation. 
For this composition project, we developed several aspects of writing that reflect the functioning of DMIs, such as parametric writing in the form of strata with different temporalities, the place of gesture, both physical and musical, and the development of interaction strategies inspired by metaphors linked to computer music. This project has identified a number of key issues for the composition, performance and perception of a mixed piece with Karlax, involving current computer systems and significant control possibilities (real-time processing, multi-channel diffusion system). These include the creation of the Karlax sound universe, the management of events throughout the piece, latency problems, the difficulty of making musical intentions perceptible through gestures, and the writing and control of spatialization. Finally, Karlax offers many interesting aspects for both composer and performer: the composer creates a sound and performance space by delineating difficulties and expressive potential, while the intermodal ambiguity of the gesture-sound pairing offers a new layer of meaning.</p>\r\n<p>The Karlax may correspond to the needs of a musician looking for an interface with high control quality that can be integrated into different contexts, particularly in interaction with acoustic musical instruments. So, in what kind of setting would the Karlax be an option for a composer, performer or listener? What updates to the interface and tools, and what repertoire, would be required? Karlax's place in the DMI landscape is unique and endowed with great potential, but the obstacles remain significant, particularly when it comes to reproducing pieces. With this in mind, this presentation uses the example of an interface to examine the role of instruments in contemporary musical creation.</p>",
        "topics": [
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 11717,
            "forum_user": {
                "id": 11714,
                "user": 11717,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/B-Lavastre.jpg",
                "avatar_url": "/media/cache/37/50/3750be91ae20defc6ac6cdb890e962d5.jpg",
                "biography": "Benjamin Lavastre is a composer, guitarist and researcher. He trained at the Haute École de Musique de Genève with Michael Jarrell, Luis Naón, Eric Daubresse and Pascal Dusapin, and at the Hochschule Weimar with Michael Obst. He is currently pursuing a PhD at McGill University in Montreal with Philippe Leroux. \nHis research focuses on interactions between digital musical instruments (notably the Karlax) and acoustic instruments. He has published several articles and book chapters, notably for the CMMR conference and the Sonic Design Anthology in collaboration with Marcelo M. Wanderley.\nHis works have been performed by such prestigious ensembles and conductors as the MDR Symphony Orchestra, the TANA Quartet, the Contemporary Music Ensemble, the Quasar Sax Quartet, Ensemble Éclat, Lorraine Vaillancourt, Guillaume Bourgogne, Kanako Abe and Ullrich Kern, and at the Archipel, ZKM and Impuls festivals, among others. He won the Prix du conseil de Genève in 2018 and the Prix Paléo Festival de Nyon in 2020. Also a guitarist, he plays a varied repertoire ranging from contemporary music to jazz. His works are published by Babelscores.",
                "date_modified": "2026-02-23T21:52:11.466328+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "benjaminL",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "elements-of-technique-and-language-for-performing-and-composing-with-the-digital-musical-instrument-karlax-case-study-of-instrumental-interaction-iii-for-guitar-and-karlax",
        "pk": 3198,
        "published": true,
        "publish_date": "2025-01-03T23:56:37+01:00"
    },
    {
        "title": "Modalys Tutorial No. 5: Forcing The Membrane",
        "description": "Fifth part of my tutorial series on using Modalys and its libraries in Modalisp, OpenMusic and Max.",
        "content": "<p style=\"text-align: justify;\"><strong>In this tutorial, we try striking and the force connection on a membrane.</strong></p>\r\n<p style=\"text-align: justify;\"><br />Striking a membrane in Modalys is quite simple. But the force connection is also interesting, because it does not require striking an object. In fact, the force connection is described in the documentation as the connection to use whenever possible. Although controlling the force connection may require some adjustment, it feels more natural than the simple strike connection.</p>\r\n<h6 style=\"text-align: justify;\"></h6>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/IC7C74NFYFs\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h6 style=\"text-align: center;\"></h6>\r\n<p style=\"text-align: left;\"><strong>This tutorial was made by Olav Lervik.&nbsp;</strong></p>",
        "topics": [
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 348,
                "name": "Max externals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 133,
                "name": "Sound synthesis and treatment",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n5-forcing-the-membrane",
        "pk": 727,
        "published": true,
        "publish_date": "2020-09-15T10:00:00+02:00"
    },
    {
        "title": "Eye of the Storm - Shruti Nagaraj",
        "description": "a visual installation reactive to ambisonic audio and weather data, bringing together improvised violin and live London weather.",
        "content": "<p><span><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></span></p>\r\n<p><span></span></p>\r\n<p><span>Presented by:&nbsp;Shruti Nagaraj<br /><a href=\"https://forum.ircam.fr/profile/shrutinagaraj/\">Biography<br /><br /></a></span></p>\r\n<p>In a time of environmental uncertainty, climate change and cultural flux, exploring weather patterns and musical improvisation recalls humanity's interconnection with the natural soundscape that surrounds it. By deepening this symbiotic relationship between spontaneous atmospheric phenomena and improvised expression, I try to develop a deeper understanding of the natural world and of human interaction.</p>\r\n<p>Through an immersive installation, this project captures and documents the interaction within the improvisation of a solo violin performance, through the constraints of the space and the cyclical movements of the performer - <strong>Julia Br&uuml;ssel.</strong></p>\r\n<p>Contemporary improvised jazz is characterized by its spontaneity and its abrupt shifts of energy. Both phenomena embrace unpredictability, which makes each event unique. This project aims to document the infinite possibilities of improvisation in musical performance, and to link the unpredictability and spontaneity of live improvisation to similarly interactive and spontaneous weather parameters, such as rain and wind.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Back to the event</a></strong></p>",
        "topics": [
            {
                "id": 623,
                "name": "Ambisonics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1876,
                "name": "free improv",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1822,
                "name": "free improvisation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1152,
                "name": "installation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1247,
                "name": "spatial audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 273,
                "name": "Touchdesigner",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1875,
                "name": "violin",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1877,
                "name": "weather",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 53545,
            "forum_user": {
                "id": 53483,
                "user": 53545,
                "first_name": "Shruti",
                "last_name": "Nagaraj",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/8ab05e6810b189828d11159cdc7f3221?s=120&d=retro",
                "biography": "Shruti Nagaraj (b. 2000) is a multimedia artist and photographer from Bangalore, India, currently based in London, UK.\nHer work revolves around music and the intersections of gender and the environment.\nShe is currently an M.A Digital Direction student at the Royal College of Art, and is a member of Diversify Photo's 'Up Next' selection of up-and-coming photographers.",
                "date_modified": "2024-03-13T01:08:35.009364+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "shrutinagaraj",
            "first_name": "Shruti",
            "last_name": "Nagaraj",
            "bookmarks": []
        },
        "slug": "eye-of-the-storm",
        "pk": 2815,
        "published": true,
        "publish_date": "2024-03-07T15:01:45+01:00"
    },
    {
        "title": "Creating immersive acoustics for virtual worlds using Elliptique by Benoît Alary",
        "description": "As a follow-up to the November 6th presentation (Acoustics for musicians: from concert halls to virtual realities by Benoit Alary), this workshop will demonstrate the use of Elliptique, a spatial reverberator recently developed for Spat5 and Max/MSP. We will take an in-depth look at how this reverberator can be used to create complex spatial reverberation during a live performance. Through a series of demonstrations and sound examples, we will explore different ways of using Elliptique. We will produce both realistic reverberation and ethereal acoustics, and discuss how to design a Max patch that can adapt to different multichannel systems. Prior experience with Spat5 or Max/MSP is recommended, but the key concepts presented in this workshop also apply beyond the scope of these technologies, so anyone interested in immersive sound technologies is welcome to attend.",
        "content": "<p style=\"font-weight: 400;\">As a follow-up to the November 6<sup>th</sup>&nbsp;presentation (<em>Acoustics for musicians: from concert halls to virtual realities&nbsp;by Benoit Alary</em>), this workshop will demonstrate the use of Elliptique, a spatial reverberator recently developed for Spat5 and Max<span>/MSP. We&nbsp;</span>will take an in-depth look at how this reverberator can be used to create complex spatial reverberation during a live performance. Through a series of demonstrations and sound examples, we will explore different ways of using Elliptique. We will produce both realistic reverberation and ethereal acoustics, and discuss how to design a Max patch that can adapt to different multichannel systems. Prior experience with Spat5 or Max/MSP is recommended, but the key concepts presented in this workshop also apply beyond the scope of these technologies, so anyone interested in immersive sound technologies is welcome to attend.</p>",
        "topics": [],
        "user": {
            "pk": 24564,
            "forum_user": {
                "id": 24537,
                "user": 24564,
                "first_name": "Benoit",
                "last_name": "Alary",
                "avatar": "https://forum.ircam.fr/media/avatars/BA_2021_06.jpg",
                "avatar_url": "/media/cache/27/b3/27b31b6ef7aaf23499bed29603125e56.jpg",
                "biography": "Benoit Alary is a researcher in the Acoustic and Cognitive Spaces team of the STMS lab, part of IRCAM. He has over fifteen years of experience in immersive audio, shared between industry and academia, including a Ph.D. in acoustics and signal processing from Aalto University (Finland) and an MSc from the University of Edinburgh. His research centers around sound reproduction, analysis/synthesis, and perception. His current projects involve artificial reverberation, 6DoF sound reproduction, machine learning, and virtual acoustics.",
                "date_modified": "2025-11-07T10:18:43.509252+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 317,
                        "forum_user": 24537,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-07",
                        "type": 0,
                        "keys": [
                            {
                                "id": 566,
                                "membership": 317
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "balary",
            "first_name": "Benoit",
            "last_name": "Alary",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3070,
                    "user": 24564,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "creating-immersive-acoustics-for-virtual-worlds-using-elliptique-by-benoit-alary",
        "pk": 3071,
        "published": true,
        "publish_date": "2024-10-24T16:05:05+02:00"
    },
    {
        "title": "Écoutez le nouveau single dépouillé de Jorja Smith.",
        "description": "A côté d'un clip vidéo entièrement tourné sur webcam",
        "content": "<p><img style=\"display: block; margin-left: auto; margin-right: auto;\" src=\"https://www.nme.com/wp-content/uploads/2021/03/jorja-smith-addicted-ss-696x442.jpg\" alt=\"\" width=\"628\" height=\"399\" /></p>\n<p><strong>Jorja Smith</strong><span>&nbsp;a marqu&eacute; sa premi&egrave;re sortie en solo de 2021, avec la sortie de son nouveau single et de sa nouvelle vid&eacute;o &laquo;Addicted&raquo;.</span></p>\n<p><span>La premi&egrave;re chanson que Smith a publi&eacute;e seule depuis l'ann&eacute;e derni&egrave;re,&nbsp;</span><span>&laquo;<em>By Any Means</em>&raquo;</span><span>, politiquement charg&eacute;e, &laquo;Addicted&raquo; est un peu plus discr&egrave;te que ses chansons r&eacute;centes, mettant en lumi&egrave;re sa voix sensuelle.</span></p>\n<p><span>Parall&egrave;lement &agrave; la chanson, Smith a &eacute;galement d&eacute;voil&eacute; le clip de &laquo;Addicted&raquo;, qu'elle a r&eacute;alis&eacute; aux c&ocirc;t&eacute;s de Savanah Leaf.&nbsp;La vid&eacute;o a &eacute;t&eacute; enti&egrave;rement tourn&eacute;e sur webcam et voit Smith &agrave; cheval sur la plage, dansant sous des feux d'artifice et plus encore.</span></p>\n<p><span>Regardez la vid&eacute;o de &laquo;Addicted&raquo; ci-dessous :</span></p>\n<p><span><iframe src=\"//www.youtube.com/embed/FFzH_9guHQM\" width=\"560\" height=\"314\" allowfullscreen=\"allowfullscreen\"></iframe></span></p>\n<p><span>Dans un communiqu&eacute; de presse, Smith a d&eacute;clar&eacute; que &laquo;Addicted&raquo; consiste &agrave; &laquo;se concentrer sur le fait de vouloir toute l'attention de quelqu'un qui ne donne pas assez (ou pas du tout) quand il devrait l'&ecirc;tre&raquo;.</span></p>\n<p><span>Parlant de la vid&eacute;o musicale incroyablement bricolage, elle a d&eacute;clar&eacute; : &laquo;la vid&eacute;o est de multiples versions de moi chantant la chanson ;&nbsp;m'amuser &agrave; m'habiller, ne pas essayer d'&ecirc;tre trop s&eacute;rieux et me donner plus de libert&eacute;&raquo;.</span></p>\n<p><span>&laquo;Addicted&raquo; est la premi&egrave;re piste que Smith m&egrave;ne depuis qu'elle a fait &eacute;quipe avec&nbsp;</span><span>Popcaan</span><span>&nbsp;pour&nbsp;</span><span>&laquo;Come Over&raquo; de l'ann&eacute;e derni&egrave;re.&nbsp;Elle a &eacute;t&eacute; sollicit&eacute;e par&nbsp;</span><span>Enny</span><span>&nbsp;pour sauter sur un remix de&nbsp;</span><span>&laquo;Peng Black Girls&raquo;</span><span>.</span></p>\n<p><span>Plus t&ocirc;t cette ann&eacute;e, Jorja Smith a r&eacute;v&eacute;l&eacute; qu'elle&nbsp;dirigeait sa propre s&eacute;rie BBC Radio 3 &laquo;Tearjerker&raquo;, sur le pouvoir de gu&eacute;rison de la musique.</span></p>",
        "topics": [],
        "user": {
            "pk": 20993,
            "forum_user": {
                "id": 20982,
                "user": 20993,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/0fbcb97df82fd922856fcd2104aca9fa?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "levidua",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ecoutez-le-nouveau-single-depouille-de-jorja-smith",
        "pk": 937,
        "published": false,
        "publish_date": "2021-03-11T09:50:20.109437+01:00"
    },
    {
        "title": "Somax2 2.41 released",
        "description": "Somax2 2.41 released, along with quick access to rich interaction demo/tutorial",
        "content": "<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div><strong>Somax2 version 2.4.1 is released.</strong></div>\r\n<div>&nbsp;</div>\r\n<div><img alt=\"Capture d&amp;rsquo;&eacute;cran, le 2023-02-25 &agrave; 17.37.42.png\" id=\"&lt;13127DE3-860C-45F3-AFFA-ACCF9C3A285C&gt;\" src=\"https://forum.ircam.fr/media/uploads/user/f827bc96a599627385779240af517e07.png\" /></div>\r\n<div>&nbsp;</div>\r\n<div>This version is fully operational with a sophisticated GUI that lets you create co-creative agents and&nbsp;</div>\r\n<div>make them interact / improvise with the external world and with each other.</div>\r\n<div>&nbsp;</div>\r\n<div>As promised to many, we have prepared&nbsp;a set to&nbsp;help you<strong>&nbsp;jump (almost) immediately to real-life, rich interactions&nbsp;</strong>with Somax2<strong>,</strong>&nbsp;with ready-made players and music corpuses, carefully adjusted parameter initializations, and a single 10-step tutorial.</div>\r\n<div>&nbsp;</div>\r\n<div>Rich musical corpuses (MIDI and audio) have been built from great masters (modern, contemporary, jazz, etc.) 
so you can readily experiment with rich interaction before building your own corpuses.</div>\r\n<div>&nbsp;</div>\r\n<div>You can first get a precise idea of what to expect from this set by&nbsp;<strong>watching the&nbsp;demo videos&nbsp;</strong>at:</div>\r\n<div>&nbsp;</div>\r\n<div><a href=\"https://vimeo.com/showcase/10189064\">https://vimeo.com/showcase/10189064</a></div>\r\n<div>&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<div><strong>To get going, go to&nbsp;<a href=\"https://nubo.ircam.fr/index.php/s/sSXQcqWM3gFHYbi\">this repository,</a>&nbsp;and download&nbsp;</strong>all the content to your machine, then follow the readme file instructions (or first install Somax2 2.4.1 from the forum page).</div>\r\n<div>&nbsp;</div>\r\n<div>Have fun, all best</div>\r\n<div>&nbsp;</div>\r\n<div>G&eacute;rard Assayag and the REACH project fellows.</div>\r\n<div>&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<div>***</div>\r\n<div>&nbsp;</div>\r\n<div><em>Stay tuned at&nbsp;<a href=\"http://repmus.ircam.fr/somax2\">repmus.ircam.fr/somax2</a>&nbsp;for installations, updates and new materials</em></div>\r\n<div><span><em>&nbsp;</em></span></div>\r\n<div><span><em>Watch / Listen to recent Somax2 realizations as they unfold in Jo&euml;lle L&eacute;andre&rsquo;s residency:<strong>&nbsp;</strong></em></span><a href=\"https://www.stms-lab.fr/article/joelle-leandre-en-residence-a-lircam\">REACHing OUT!&nbsp;</a></div>\r\n<div>&nbsp;</div>\r\n<div>***</div>\r\n<div>&nbsp;</div>\r\n<div>Somax 2.5 is in preparation: this version will add a full Max-style programming/messaging interface, with access to base objects (without GUI) to create your personal workflow / GUI.&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<div><img alt=\"Capture d&amp;rsquo;&eacute;cran, le 2023-02-22 &agrave; 17.51.47.png\" id=\"&lt;DB2552F3-5F96-4B2B-BC61-F380B02F028C&gt;\" src=\"https://forum.ircam.fr/media/uploads/user/4c21a16fe9e832591fd5043a02391f4e.png\" 
/></div>\r\n<div><em>Learn more about the&nbsp;</em><a href=\"https://www.stms-lab.fr/projects/pages/reach/\">REACH project&nbsp;</a></div>\r\n<div>&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>\r\n<div>--<br /><a href=\"https://www.stms-lab.fr/person/gerard-assayag/\">G&eacute;rard Assayag</a>, Research Director&nbsp;@&nbsp;<a href=\"http://www.ircam.fr/\">IRCAM</a>&nbsp;<a href=\"https://www.stms-lab.fr/\">STMS</a>&nbsp;Lab&nbsp;</div>\r\n<div>Head&nbsp;<a href=\"https://www.stms-lab.fr/team/representations-musicales/\">Music Representation</a>&nbsp;Team</div>\r\n<div>PI, ERC ADG&nbsp;<a href=\"https://ins2i.cnrs.fr/fr/cnrsinfo/la-co-creativite-musicale-entre-humain-et-machine-erc-advanced-grant-de-gerard-assayag\">REACH</a></div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 1200,
                "name": "cocreativity",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1201,
                "name": "Creative agents",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 747,
                "name": "somax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17748,
            "forum_user": {
                "id": 17743,
                "user": 17748,
                "first_name": "Gerard",
                "last_name": "Assayag",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e7f22ca09fef8b854d33ed5de26b107e?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-03T15:40:57.523680+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1236,
                        "forum_user": 17743,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "assayag",
            "first_name": "Gerard",
            "last_name": "Assayag",
            "bookmarks": []
        },
        "slug": "somax2-241-released",
        "pk": 2111,
        "published": true,
        "publish_date": "2023-03-03T19:52:08+01:00"
    },
    {
        "title": "I Suppose He Will Miss The Sea Too - Peilin Li and Peijun Jiang",
        "description": "\"Je suppose que la mer lui manquera aussi\" prendra la forme d'un théâtre interactif sonore, qui emprunte à la théorie de la \"simulation\" de Baudrillard et à l'explication de la solitude existentielle de Heidegger. Il s'agit de créer une image virtuelle de la mer de sons à partir de la mémoire de l'ancienne race pour une existence réelle - un pin solitaire. Ce projet tente de montrer une existence éternelle et indélébile et la solitude entre le ciel et la terre à travers la déconstruction et la reconstruction d'un pin.\r\n\r\nCe projet est le fruit d'une collaboration entre les artistes numériques Peilin Li et Peijun Jiang.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par :<span>&nbsp;</span>Peilin Li and Peijun Jiang<br /><a href=\"https://forum.ircam.fr/profile/jlddy/\">Biographie Peilin Li</a><br /><a href=\"https://forum.ircam.fr/profile/peijunjiang23/\">Biographie Peijun Jiang</a></p>\r\n<p></p>\r\n<p>Inspir&eacute; par un voyage de prise de conscience d'un pin. Lorsque je l'ai vu, il se tenait seul sur la vaste terre, vide, propre et solitaire. Le bruissement du vent passant &agrave; travers les aiguilles de pin m'a fait penser &agrave; un mot de la litt&eacute;rature traditionnelle chinoise, le \"SongTao\". Il d&eacute;crivait le bruit du vent soufflant dans la for&ecirc;t de pins, comme les vagues de la mer.</p>\r\n<p></p>\r\n<p>Le vent passait &agrave; travers les aiguilles de pin. Je suppose que la mer lui manquera aussi.</p>\r\n<p></p>\r\n<p>\"Je suppose que la mer lui manquera aussi\" prendra la forme d'un th&eacute;&acirc;tre interactif sonore, qui emprunte &agrave; la th&eacute;orie de la \"simulation\" de Baudrillard et &agrave; l'explication de la solitude existentielle de Heidegger. Il s'agit de cr&eacute;er une image virtuelle de la mer de sons &agrave; partir de la m&eacute;moire de l'ancienne race pour une existence r&eacute;elle - un pin solitaire. 
Ce projet tente de montrer une existence &eacute;ternelle et ind&eacute;l&eacute;bile et la solitude entre le ciel et la terre &agrave; travers la d&eacute;construction et la reconstruction d'un pin.</p>\r\n<p></p>\r\n<p>Le vent voyage &agrave; travers la for&ecirc;t et sous l'arbre, devenant l'&eacute;cho de l'arbre comme la vague de la mer.</p>\r\n<p></p>\r\n<p>Dans la performance sonore en direct, les vents seront exploit&eacute;s comme un &eacute;l&eacute;ment central, tissant une symphonie grandiose de vagues dans un spectacle immersif de son surround. Le vent &agrave; travers les aiguilles de pin cr&eacute;era le visage num&eacute;rique d'un pin.</p>\r\n<p></p>\r\n<p></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 18,
                "name": "Digital arts",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1871,
                "name": "Digital moving image",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 206,
                "name": "Interactive real-time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 917,
                "name": "sound art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 54690,
            "forum_user": {
                "id": 54628,
                "user": 54690,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/57a3a89c4a82f589836a7214af29d0d4?s=120&d=retro",
                "biography": "She is a multi-media, mixed media artist. Her works mainly involve visual interaction, sound interaction, and moving image. She is currently studying at the Royal College of Art.",
                "date_modified": "2024-03-16T18:00:27.794298+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jlddy",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "i-suppose-he-will-miss-the-sea-too",
        "pk": 2809,
        "published": true,
        "publish_date": "2024-03-06T15:18:14+01:00"
    },
    {
        "title": "Elegy - Jinyu Fang, Yunsheng Zhu, Yutong Chai, Tairan Shi",
        "description": "À cette époque, nous observons un phénomène où les gens nuisent continuellement à l’environnement naturel, tout en exprimant en même temps leur admiration pour les paysages simulés artificiellement. Cette situation apparemment contradictoire révèle un problème profond, soulignant le vaste écart entre nos actions destructrices et nos idéaux pour la nature. Alors que nous explorons la technologie, la science et la créativité, nous négligeons souvent les graves défis auxquels notre planète est confrontée. Équipe de projet : Jinyu Fang, Yunsheng Zhu, Tairan Shi, Yutong Chai",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p></p>\r\n<p>Pr&eacute;sent&eacute; par :&nbsp;Jinyu Fang, Yunsheng Zhu, Yutong Chai, Tairan Shi<br /><a href=\"https://forum.ircam.fr/profile/cygnuschai/\">Biographie&nbsp;Yutong Chai<br /></a><a href=\"https://forum.ircam.fr/profile/jinyufang/\">Biographie Jinyu Fang</a></p>\r\n<p></p>\r\n<p>Notre objectif est de cr&eacute;er un espace narratif immersif, en utilisant la r&eacute;alit&eacute; virtuelle comme support pour discuter de ce ph&eacute;nom&egrave;ne. Notre objectif est d'&eacute;voquer des souvenirs de paysages naturels qui ont &eacute;t&eacute; endommag&eacute;s en transmettant la beaut&eacute; inh&eacute;rente au bruit des d&eacute;chets, inspirant ainsi les gens &agrave; r&eacute;fl&eacute;chir &agrave; l'impact de nos modes de vie sur la Terre. Ces sons nous rappellent que nous devons &ecirc;tre responsables de nos actes. Au fur et &agrave; mesure que le r&eacute;cit se d&eacute;roule, le public entrera dans l'ann&eacute;e 3030 &agrave; la premi&egrave;re personne, o&ugrave; les sons naturels ont disparu, et participera &agrave; la restauration de ces sons. Dans notre monde virtuel cr&eacute;&eacute;, chaque son devient une note lugubre, un po&egrave;me sur la beaut&eacute; de la nature et la douleur de sa destruction. 
Nous esp&eacute;rons &eacute;veiller la conscience des gens et les motiver &agrave; r&eacute;examiner leur lien avec le monde naturel, en nous effor&ccedil;ant de mieux prot&eacute;ger notre maison commune.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c02c83c1e77f6d5abf1f748add5590ee.jpg\" /></p>\r\n<p>Techniquement, nous avons utilis&eacute; Blender et Cinema 4D pour la mod&eacute;lisation 3D, Unreal Engine 5 pour la cr&eacute;ation de sc&egrave;nes et la conception d'interactions, et Reaper, Adobe Audition et Fl Studio pour la conception sonore.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/770c6395a3ef29ee40f3de6486445a42.jpg\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1b3645602ed51b749ea1c92bec37042c.jpg\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/48a72ba59c82de28fdb26f3dca16ef6f.jpg\" /></p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/205846ba3260711618260342640fed37.png\" /></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 66276,
            "forum_user": {
                "id": 66206,
                "user": 66276,
                "first_name": "Yutong",
                "last_name": "Chai",
                "avatar": "https://forum.ircam.fr/media/avatars/EB095CEB-ACD9-4BE9-9337-E95120194593.jpeg",
                "avatar_url": "/media/cache/6b/be/6bbe7722cb6666db7982acce138fcb1b.jpg",
                "biography": "Yutong Chai is currently pursuing a postgraduate programme in Digital Direction at the Royal College of Art. Her work is based on virtual scenes and experimental images that explore non-traditional narratives. She focuses on the relationship between society and people, using different media to create multi-sensory art experiences. Prompts the audience to think about and discuss the theme.",
                "date_modified": "2024-04-06T23:03:03.104004+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "cygnuschai",
            "first_name": "Yutong",
            "last_name": "Chai",
            "bookmarks": []
        },
        "slug": "elegy-1",
        "pk": 2823,
        "published": true,
        "publish_date": "2024-03-10T23:47:35+01:00"
    },
    {
        "title": "Concert augmenté - Giovanni Montiani Noa Mick & Mezzo Forte",
        "description": "Une expérience d'écoute immersive avec un système de diffusion binaurale via un casque à conduction osseuse (BCH).",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par :&nbsp;Giovanni Montiani &amp; Mezzo Forte<br /><a href=\"https://forum.ircam.fr/profile/giovannimontiani/\">Biographie<br /><br /></a>a-live concert augment&eacute; par Mezzo Forte.&nbsp;<br /><br />L'exp&eacute;rience d'&eacute;coute comprend un syst&egrave;me de diffusion binaurale via un casque &agrave; conduction osseuse (BCH), respectant et fusionnant avec l'environnement et les instruments acoustiques.&nbsp;</p>\r\n<p>La performance propos&eacute;e est <em>Ta beaut&eacute;, exig&eacute;e par le monde futur</em> de Giovanni Montiani. En combinant les couches sonores gr&acirc;ce &agrave; l'utilisation du SPAT, l'environnement auditif est augment&eacute; : les sons sont r&eacute;partis dans l'espace, les rendant extr&ecirc;mement proches ou &eacute;loign&eacute;s de l'auditeur, amplifiant les sensations associ&eacute;es &agrave; la perception d'espaces intimes ou &eacute;tendus.&nbsp;</p>\r\n<p><b>Saxophone baryton : Noa Mick</b></p>\r\n<p><b></b></p>\r\n<p>&nbsp;<strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 621,
                "name": "3daudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1097,
                "name": "mixed music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 23362,
            "forum_user": {
                "id": 23344,
                "user": 23362,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/cd5aeb812530e8c7883a574fc8b804d7?s=120&d=retro",
                "biography": "Giovanni Montiani studied composition at the Conservatory of Florence and then at the CRR in Strasbourg with Daniel D'Adamo and Tom Mays. He is currently enrolled in the Master CIM programme at the Haute École des Arts du Rhin. His exchanges with Francesco Rizzi, Lara Morciano, José Miguel Fernández and Stefano Gervasoni, as well as his studies of electronic music with Marco Liuni, have been crucial to his career.\nHe has combined the study of music with that of literature, obtaining a degree in\nin Modern Literature at the University of Florence. Along with Giulia Lorusso and Mathieu Corajod, he is a member of the CUE Creative Union Experience composers' collective, which researches in the field of collective composition. He has been selected to attend Ircam's Cursus of Composition and Computer Music 2023-2024. His music has been performed by ensembles and soloists such as Accroche Note, Divertimento Ensemble, Ensemble Linea, Noa Mick and Sami Bounechada.",
                "date_modified": "2025-12-15T11:29:03.784655+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 505,
                        "forum_user": 23344,
                        "date_start": "2021-11-12",
                        "date_end": "2025-10-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 126,
                                "membership": 505
                            },
                            {
                                "id": 493,
                                "membership": 505
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "giovannimontiani",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "augmented-concert-giovanni-montiani-mezzo-forte",
        "pk": 2762,
        "published": true,
        "publish_date": "2024-02-21T11:00:29+01:00"
    },
    {
        "title": "Chuchotements de l'amour de l'été : Une proposition de conte sonique - Xiangefng Xu",
        "description": "Une chanson qui a raconté toute ma nuit.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par :&nbsp;Xiangefng Xu&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/finnxu/\">Biographie</a></p>\r\n<p></p>\r\n<p>Dans ce projet, j'ai l'intention de documenter une journ&eacute;e de ma vie, en me concentrant plus particuli&egrave;rement sur les moments que je passe &agrave; attendre avec impatience les messages d'un &ecirc;tre cher. Je cherche &agrave; capturer l'essence de ce voyage &eacute;motionnel par le biais du son, en utilisant une gamme vari&eacute;e de timbres et de m&eacute;lodies. Chaque timbre et m&eacute;lodie servira de repr&eacute;sentation sonore des diff&eacute;rents &eacute;tats &eacute;motionnels et des changements que j'exp&eacute;rimente tout au long de la journ&eacute;e. En transformant ces moments de la vie r&eacute;elle en musique, j'ai l'intention de cr&eacute;er un r&eacute;cit audio profond qui r&eacute;sonne avec le langage universel de l'&eacute;motion et la tapisserie complexe de la connexion humaine.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 1805,
                "name": "electroacoustic music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1866,
                "name": "sonic",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1867,
                "name": "storytelling",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 55033,
            "forum_user": {
                "id": 54971,
                "user": 55033,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/4bea1f192740c01ea4518fd11727c9f4?s=120&d=retro",
                "biography": "I am a multifaceted creative professional currently pursuing my design journey at the prestigious Royal College of Art. My roots in design run deep, and I have dedicated my academic pursuits to honing my skills at the Royal College of Art, where I am constantly inspired by innovation and artistic exploration. Beyond the world of design, my love for sound has led me to create immersive auditory experiences that resonate with both personal and universal emotions. Drawing from my experiences as a DJ, I have a keen ear for crafting dynamic soundscapes that captivate and engage. This synthesis of design and music has allowed me to approach creative projects from a unique perspective, seamlessly weaving aesthetics and emotion into my work.",
                "date_modified": "2024-04-07T23:16:36.631551+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "finnxu",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "fall",
        "pk": 2798,
        "published": true,
        "publish_date": "2024-03-04T18:07:32+01:00"
    },
    {
        "title": "Velocity Bounce and Ghidorina by Helen Bledsoe",
        "description": "This is a brief abstract of my presentation for the IRCAM Forum in Latvia 2025 presenting two new works of mine that integrate Somax II and a RAVE models (one of which is my flute playing) in combination with either live performance or live visuals using A-life simulations with Tölvera",
        "content": "<p>↩&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">Back to IRCAM Forum Workshops Rīga-Liepāja (Latvia)</a></p>\r\n<p style=\"margin-bottom: 0in;\">Abstract: In this demo I will show concert or video documentation and present the concepts about two works that use Rave and Somax:</p>\r\n<p style=\"margin-bottom: 0in;\">1) \"Poem for G\" for voice and alto flute, which will be premiered in Maastricht 20. September, shows a live implementation of these technologies. They build a framework for a three-layered interplay of flute sounds, text (derived from Anne Sexon's \"O Ye Tongues\"), and synthesized sounds (wavetable synthesis and pvocoder).</p>\r\n<p style=\"margin-bottom: 0in;\">&nbsp;</p>\r\n<p style=\"margin-bottom: 0in;\">The second work \"Velocity Bounce\" is an audio/visual short installation featuring Somax and Rave in conjunction with visuals produced by T&ouml;lvera, a python library which uses algorithms from A-life (flocking, swarming, growth) to simulate particle behavior. The data from these particles is used to navigate latent spaces in Rave models, which then under certain circumstances will engage with Somax players.</p>\r\n<p style=\"margin-bottom: 0in;\"></p>\r\n<p style=\"margin-bottom: 0in;\"><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 1825,
                "name": "a-life",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 779,
                "name": "RAVE",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1199,
                "name": "Somax2",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3195,
                "name": "Tolvera",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 14895,
            "forum_user": {
                "id": 14892,
                "user": 14895,
                "first_name": "Helen",
                "last_name": "Bledsoe",
                "avatar": "https://forum.ircam.fr/media/avatars/Bledsoe_1.png",
                "avatar_url": "/media/cache/d0/03/d003c24fc9f49a926461b290796e9c30.jpg",
                "biography": null,
                "date_modified": "2025-10-26T20:29:06.188063+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "cloudchamber",
            "first_name": "Helen",
            "last_name": "Bledsoe",
            "bookmarks": []
        },
        "slug": "velocity-bounce-and-ghidorina",
        "pk": 3579,
        "published": true,
        "publish_date": "2025-07-23T17:46:32+02:00"
    },
    {
        "title": "Go to the park - Ruohong Chen",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>This project is a poetic discussion about the relationship between people, parks, and space. Whenever I go to the park, as I look around, I always wonder why people go to the park? What do parks really mean to us? What do people feel in the park? Are there any similarities? How would people think if I presented my feelings about Hyde Park in a different form in another place? I will use video, sound and projection to present my experience of Hyde Park in a place other than Hyde Park, giving the audience an immersive experience</p>",
        "topics": [],
        "user": {
            "pk": 33006,
            "forum_user": {
                "id": 32958,
                "user": 33006,
                "first_name": "Ruohong",
                "last_name": "Chen",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1b675b34de06c95d5efa8ea2afb461f0?s=120&d=retro",
                "biography": "Ruohong Chen, Feb, 1997. Born in Chongqing, China. Art student. \n\nHer work begins with her own, through her observations, engaging with unnoticeable problems that exist in the commonplace and which cause crucial social issues. She works mainly through photography, illustration and video to make those questions, emotions and thoughts visible and audible.",
                "date_modified": "2023-02-07T17:32:20+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ruohongchen",
            "first_name": "Ruohong",
            "last_name": "Chen",
            "bookmarks": []
        },
        "slug": "go-to-the-park",
        "pk": 2093,
        "published": true,
        "publish_date": "2023-02-28T17:06:25+01:00"
    },
    {
        "title": "Mute - Paul Baule",
        "description": "\"Un grand silence s'étend sur le monde naturel alors que le bruit de l'homme devient assourdissant. - Bernie Krause",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p>Pr&eacute;sent&eacute; par: Paul Baule<br /><a href=\"https://forum.ircam.fr/profile/paulbaule/\">Biography</a></p>\r\n<p>Mute (WT) est un prototype d'installation interactive qui d&eacute;peint une r&eacute;alit&eacute; choquante sur l'&eacute;tat de notre plan&egrave;te : le vaste orchestre de la vie, le paysage sonore du monde naturel, est en train de se taire.</p>\r\n<p>Le public est invit&eacute; &agrave; s'engager dans une visualisation de donn&eacute;es \"vivantes\", montrant une vol&eacute;e &eacute;mergente de 17 000 formes d'ondes audio \"volantes\" qui repr&eacute;sentent statistiquement 17 des esp&egrave;ces d'oiseaux chanteurs les plus menac&eacute;es du Royaume-Uni. Bas&eacute; sur l'&eacute;volution de la population de ces esp&egrave;ces entre 1967 et 2020, le projet rend visible et audible la fa&ccedil;on dont le paysage sonore des chants d'oiseaux au Royaume-Uni a radicalement chang&eacute; au fil du temps, comment une vol&eacute;e initialement diversifi&eacute;e, bruyante et vivante est devenue de plus en plus homog&egrave;ne et silencieuse.</p>\r\n<p>En soulignant la fragilit&eacute; des cordes vocales de la nature, menac&eacute;es par une destruction humaine sans pr&eacute;c&eacute;dent, il vise &agrave; cr&eacute;er une passerelle &eacute;motionnelle vers les statistiques affligeantes mais inaccessibles de la perte de biodiversit&eacute; et tente de contribuer &agrave; un avenir digne d'&ecirc;tre v&eacute;cu pour toutes les esp&egrave;ces.</p>\r\n<p><br /><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></p>",
        "topics": [
            {
                "id": 1905,
                "name": "3D Animation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1907,
                "name": "Biodiversity Loss",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1906,
                "name": "Birdsongs",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1904,
                "name": "Data Visualisation",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 815,
                "name": "soundscape",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 54964,
            "forum_user": {
                "id": 54902,
                "user": 54964,
                "first_name": "Paul",
                "last_name": "Baule",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/7ea4b9838c9ff88f706d58ba491103cd?s=120&d=retro",
                "biography": "Paul is a creator in the broadest sense. Combining graphics, music, text, film, concept and strategy, he aims to leverage the full potential of the arts in tackling the systemic challenges of our time. After finishing his Bachelor's degree in Social and Economic Communication at the University of the Arts Berlin he conceptualised, designed and executed complex cross-media projects on topics like futurology, climate action and sustainable development with partners like Greenpeace, the German Institute for Human Rights, Climate Analytics, GermanZero and Client Earth. His most recent project: a multi-format campaign on the rise of climate litigation, contributing to the first ever climate case before the International Court Justice. Collaborating with scientists, philosophers, activists and various creatives deepened Paul’s dedication to purpose-driven storytelling and transformative change making. In a decade that will decide the centuries to come, he aims to unite his broad creative skills and interests in immersive projects that go to the root, that make us realise, rethink and reconnect, and thus contribute to the far-reaching social transformation, that is so urgently needed. Pau",
                "date_modified": "2024-03-17T09:19:14.561666+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "paulbaule",
            "first_name": "Paul",
            "last_name": "Baule",
            "bookmarks": []
        },
        "slug": "passeri-polyphony",
        "pk": 2836,
        "published": true,
        "publish_date": "2024-03-17T09:32:51+01:00"
    },
    {
        "title": "OpenMusic News by Karim Haddad & Carlos Agon",
        "description": "Karim Haddad presents OpenMusic 8.0 latest features, improvements, and bug fixes. Some new OM libraries will also be presented with the contribution of Carlos Agon.",
        "content": "<p><strong><strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></p>\r\n<p><strong><strong><img src=\"/media/uploads/ban_openmusic-384x157.png\" alt=\"\" width=\"384\" height=\"157\" /></strong></strong></p>",
        "topics": [
            {
                "id": 954,
                "name": "CAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 14,
            "forum_user": {
                "id": 14,
                "user": 14,
                "first_name": "Karim",
                "last_name": "Haddad",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1f556229c0742ef0586dd43d312f81a4?s=120&d=retro",
                "biography": "Karim Haddad was born in 1962 in Beirut Lebanon. He studied at the national conservatory there until it closed its doors in 1975 due to the civil war. He then went on to study philosophy and literature. Haddad received six awards from the CNSMD de Paris in addition to the Diplôme Supérieur de Composition with honors. He has worked with composers such as A.Bancquart, P. Mefano, K. Huber, and Emmanuel Nunes. This learning period is marked by his keen interest for non-tempered spaces and their strong relationship with temporal poetry. In 1992 and 1994 he took part in Ferienkursen für Musik in Darmstadt where he received a scholarship. In 1995, he took a class in computer music at IRCAM, and from that point on, the computer became the only tool he used for the elaboration of his works.\r\n\r\nAs a computer music expert, and more particularly as an expert in computer-assisted composition, in 2000 he is given the responsibility of technical support for the IRCAM Forum. He has developed several tools for the OpenMusic environment (synthesis control via Csound), as well as interfaces between this environment and score editors such as Finale and Lilypond.",
                "date_modified": "2026-02-18T11:08:17.096351+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 3,
                        "forum_user": 14,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 0,
                        "keys": [
                            {
                                "id": 544,
                                "membership": 3
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "haddad",
            "first_name": "Karim",
            "last_name": "Haddad",
            "bookmarks": []
        },
        "slug": "openmusic-news",
        "pk": 4382,
        "published": true,
        "publish_date": "2026-02-18T11:12:02+01:00"
    },
    {
        "title": "La conscience dans le paysage - Artemis Weng, Bethan Hancock",
        "description": "Bethan Hancock et Artemis Zih-Jie Weng sont un groupe de recherche basé à Londres qui s'intéresse au lien entre la conscience humaine et les environnements en évolution - en particulier ceux qui sont liminaires et métaphoriquement en évolution. Le projet actuel Consciousness in The Landscape explore les concepts de subconscience et de curiosité humaines lorsqu'ils sont placés dans des environnements reflétant l'Anthropocène.",
        "content": "<p><span><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par:&nbsp;Artemis Weng &amp;&nbsp;Bethan Hancock<br /><a href=\"https://forum.ircam.fr/profile/artemisweng/\">Biographie Artemis Weng</a>&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/beth4n/\">Biographie&nbsp;Bethan Hancock</a><br /></span></p>\r\n<p><span></span></p>\r\n<p>Ce projet sp&eacute;culatif explore les interactions entre la conscience humaine et sa r&eacute;action &agrave; diff&eacute;rents environnements, en enregistrant les mouvements de l'individu par le biais de vibrations sonores. Le format est un jeu interactif produit &agrave; l'aide d'un logiciel de jeu du syst&egrave;me VR ; le joueur explore un itin&eacute;raire &agrave; travers un paysage forestier qui &eacute;volue progressivement vers un paysage urbain en utilisant l'imagerie de l'Anthropoc&egrave;ne. Un d&eacute;tail cl&eacute; de ce projet est l'exp&eacute;rience sonore entre le joueur et le public ; le joueur fait l'exp&eacute;rience de sons naturels qui sont lentement manipul&eacute;s/augment&eacute;s par des donn&eacute;es sensorielles g&eacute;n&eacute;r&eacute;es par ordinateur - des sons tels que le ruissellement de l'eau dans le lit d'une rivi&egrave;re, le sifflement du vent dans les branches des arbres et les cr&eacute;atures qui se faufilent sur le sol d'une for&ecirc;t feuillue. Inspir&eacute; par des films comme Annihilation (2018) o&ugrave; la nature &eacute;volue au fur et &agrave; mesure que les explorateurs traversent le paysage. En revanche, le public &agrave; l'ext&eacute;rieur du casque entend une performance de donn&eacute;es sonores d&eacute;riv&eacute;es des mouvements de la manette des joueurs.</p>\r\n<p>En traversant la for&ecirc;t, le joueur utilise un contr&ocirc;leur pour interagir avec l'environnement. 
Lorsqu'il r&eacute;agit au paysage &agrave; l'aide du contr&ocirc;leur, celui-ci produit des sons qui d&eacute;pendent des mouvements sp&eacute;cifiques du joueur : par exemple, s'il pousse vers l'avant avec le joystick gauche, il y aura un bourdonnement/vibration sp&eacute;cifique, mais si quelque chose sur son chemin l'am&egrave;ne &agrave; bouger son joystick, un son secondaire est produit. Ces r&eacute;sultats changeront au fur et &agrave; mesure que le paysage naturel sera affect&eacute; par l'esth&eacute;tique de l'Anthropoc&egrave;ne ; le r&eacute;sultat sera une documentation de donn&eacute;es sonores enregistr&eacute;es et une performance refl&eacute;tant les actions de la curiosit&eacute; humaine et les d&eacute;cisions subconscientes.</p>\r\n<p>Dans le paysage de l'Anthropoc&egrave;ne, les sons deviennent de plus en plus synth&eacute;tiques et artificiels. L'ambisonie joue un r&ocirc;le crucial dans cette exp&eacute;rience, encourageant le joueur &agrave; s'&eacute;carter du chemin lorsque c'est possible. Il explore les fa&ccedil;ons dont la curiosit&eacute; humaine interagit avec diff&eacute;rents environnements au fur et &agrave; mesure qu'ils &eacute;voluent, en produisant des sons exp&eacute;rimentaux qui refl&egrave;tent le climat. Les environnements contrast&eacute;s permettent une comparaison fascinante des r&eacute;sultats, car chaque passage est unique. 
Les donn&eacute;es sonores sont repr&eacute;sent&eacute;es en montrant les vibrations associ&eacute;es &agrave; ces sons afin de transmettre une exp&eacute;rience multisensorielle.</p>\r\n<p>Le projet sera pr&eacute;sent&eacute; avec une d&eacute;mo de la simulation de jeu jou&eacute;e par un individu tandis que le public fera l'exp&eacute;rience de la performance de la documentation des donn&eacute;es sonores qui sera transmise sur un second &eacute;cran.</p>\r\n<p></p>\r\n<p><span><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement</a></strong></span></p>",
        "topics": [
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 53457,
            "forum_user": {
                "id": 53395,
                "user": 53457,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_0656.jpeg",
                "avatar_url": "/media/cache/c3/27/c32751f5183a9ac3a8e2ebe56c597d5a.jpg",
                "biography": "Artemis (Zih-Jie) Weng is an experimental new media artist & creative technologist working across multiple avenues of immersive storytelling. She is driven by a passion for sound and music, world-building, engaging experiences and simulations. She is currently interested in dreams, realms, and social justice while constantly exploring new mediums.",
                "date_modified": "2024-04-05T17:33:57.317013+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "artemisweng",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2779,
                    "user": 53457,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 119,
                    "user": 53457,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "consciousness-in-the-landscape",
        "pk": 2779,
        "published": true,
        "publish_date": "2024-03-01T19:29:48+01:00"
    },
    {
        "title": "An Adaptive Acoustic Software for Instrumental Music for can be tangible use in Music Hardwares, Products and Accessories by Arnab Dalal",
        "description": "This Projects presents an Adaptive Psychoacoustic Model designed to process and tune audio data for high-fidelity instrumental music, which contains no lyrical attributes. The approach includes: 1. Audio Extraction 2. EQ Techniques and Psychoacoustic Models 3. Adaptive Audio Codec with AI Integration",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\"></div>\r\n<p><strong>Introduction:</strong></p>\r\n<p>It's no wonder that today, nearly <strong>60-70%</strong> of the music humanity has ever created and experienced falls under the instrumental category. Despite this there is <strong>no dedicated acoustic software</strong> or codec designed to enhance the <strong>instrumental music listening experience.&nbsp;</strong></p>\r\n<p>As music evolves, we're seeing a <strong>shift towards a deeper appreciation of Instrumental Sound</strong>. Whether it's the soothing rhythms of wellness music or pulsating beats of hard techno, it's no doubt that instrumental music plays a pivotal role. It allows us to experience the raw essence of sound--uninterrupted by lyrics. Lyrics can sometimes impose the songwriter's emotions and narrative onto the listener. Instrumental music on the other hand gives space for personal interpretation, inviting the listener to connect with music in it's purest form.&nbsp;</p>\r\n<p>Studies show that listening to instrumental music can enhance congnitve function, creativity and focus. There is also evidence that professional pianists are much better than non-musicians at discriminating two closely separated points, perhaps from years of sight reading. They also improved faster with practice, suggesting that <strong>music makes brains more plastic</strong> in general. 
<strong>Learn an instrument, then, and it might get easier to learn everything else</strong>.&nbsp;</p>\r\n<p><img alt=\"equal-loudness contour\" src=\"https://forum.ircam.fr/media/uploads/user/4fecd70a4a8b952d8954022c1aaea514.jpeg\" width=\"801\" height=\"616\" /></p>\r\n<p><img alt=\"Frequency Response Matrix (Heatmap)\" src=\"https://forum.ircam.fr/media/uploads/user/33e3cd71652c8170a039eedac921edec.png\" width=\"800\" height=\"502\" /></p>\r\n<p>The X-axis represents different frequency bands (in Hz) on a logarithmic scale. The Y-axis shows amplitude levels in decibels (dB). The colours represent the amplitude at specific ranges, with warmer colours indicating higher amplitudes (closer to 0 dB) and cooler colours representing lower amplitudes. <strong>The visualisation helps to quickly identify how different bands are affected, with an overview of the overall response.&nbsp;</strong></p>\r\n<p><strong>That's what we're here to change. Our Research and Approach:</strong></p>\r\n<p>At present, we're developing a codec that optimises the<strong> 2-4 kHz range</strong>--<strong>the sweet spot of the human voice frequency spectrum--but reimagined for instrumental music</strong>. Our goal is to <strong>enrich this range, giving listeners a more immersive and refined auditory experience</strong>. We've mapped out the key frequency behaviours and analysed how timbre and harmonics contribute to Instrumental Sound. Here's an overview of our process:</p>\r\n<p><strong>1. Step One: Signal Analysis</strong></p>\r\n<p>We start by analysing the audio data. This allows us to tailor the listening experience to the specific characteristics of the music. 
(References Below)</p>\r\n<p><img alt=\"Chromagram\" src=\"https://forum.ircam.fr/media/uploads/user/cfefac63a1afa67c80b08be1072d8bc7.png\" width=\"767\" height=\"574\" /></p>\r\n<p><img alt=\"Key Strength\" src=\"https://forum.ircam.fr/media/uploads/user/3834dac5344ad7e086a9fc1406c4175a.png\" width=\"731\" height=\"585\" /></p>\r\n<p>&nbsp;</p>\r\n<p><strong>2. Step Two: Proprietary Processing Algorithms</strong></p>\r\n<p>Using our provisionally patented algorithms, we apply cutting-edge processing techniques to optimise the voice frequency range and elevate the listener's experience of instrumental sound.&nbsp;</p>\r\n<p><img alt=\"Frequency Response\" src=\"https://forum.ircam.fr/media/uploads/user/5afc05042f2fa2d3a2d76db52253a722.png\" width=\"751\" height=\"448\" /></p>\r\n<p><strong>3. Step Three: AI Integration</strong></p>\r\n<p>Finally, we incorporate AI to refine the sound. Since not all music is the same, this step allows us to fine-tune the audio data and make adjustments to each individual track.&nbsp;</p>\r\n<p><img alt=\"Frequency Response\" src=\"https://forum.ircam.fr/media/uploads/user/cb9d29821ac354374d39ca9adc977605.png\" width=\"740\" height=\"403\" /></p>\r\n<p><strong>Conclusion:</strong></p>\r\n<p>Our approach places special emphasis on music creators, including Artists, Collaborators, Sound Designers and Record Labels working with genres like Ambient, Classical, Orchestral Music, Film Scores, and a vast array of Experimental Music.&nbsp;</p>\r\n<p>Let's experience together how different genres--whether it's ambient, electroacoustic or even DIY instruments--respond to these new enhancements. Let's make this a conversation, not just an article or presentation!&nbsp;</p>",
        "topics": [
            {
                "id": 458,
                "name": "Ambient",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2324,
                "name": "Audio Extraction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2510,
                "name": "Audio Synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2321,
                "name": "digital signal processing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 850,
                "name": "experimental",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2322,
                "name": "Psychoacoustic Model",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18336,
            "forum_user": {
                "id": 18329,
                "user": 18336,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/WhatsApp_Image_2024-10-10_at_16.42.11.jpeg",
                "avatar_url": "/media/cache/46/33/463303acd1b38c3ab3ac694a940d675c.jpg",
                "biography": "I'm the Founder/Director of RESET NETWORKS (OPC) PRIVATE LIMITED. We're an experimental culture driven brand with our goal to constantly drive innovation and inspire, thus helping to lead and define the progression of electronic music culture. As a startup recognised & certified under the #startupindia scheme, RESET is an early stage platform for new developments in Music Technology.",
                "date_modified": "2026-01-03T14:56:10.899336+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 978,
                        "forum_user": 18329,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "dalalarnab93",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "an-adaptive-acoustic-software-for-instrumental-music-for-can-be-tangible-use-in-music-hardwares-products-and-accessories",
        "pk": 3197,
        "published": true,
        "publish_date": "2025-01-03T18:33:24+01:00"
    },
    {
        "title": "Sculpter l’espace : re/synthèse spatiale en 3D de structures sonores complexes",
        "description": "Résidence en recherche artistique 2017.18\r\nNúria Giménez-Comas et Marlon Schumacher\r\nEn collaboration avec l'équipe Espaces acoustiques et cognitifs de l’Ircam-STMS et le ZKM.",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">R&eacute;sidence en recherche artistique 2017.18</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<p><strong>Sculpter l&rsquo;espace : re/synth&egrave;se spatiale en 3D de structures sonores complexes</strong><br />En collaboration avec l'&eacute;quipe Espaces acoustiques et cognitifs de l&rsquo;Ircam-STMS et le ZKM.</p>\r\n<p>Le projet collaboratif de recherche explore et d&eacute;veloppe la notion de sculpture spatiale en 3D avec un travail sur la synth&egrave;se de textures. Les dimensions de couche et de densit&eacute;, timbrales et spatiales, seront abord&eacute;es comme des entourages synth&eacute;tiques et immersifs. Gr&acirc;ce &agrave; l&rsquo;utilisation des librairies OM-Chroma et OM-Prisma, la compositrice N&uacute;ria Gim&eacute;nez-Comas modifie les connexions existantes et contr&ocirc;le les r&eacute;flexions et les effets de salle. 
Marlon Schumacher observe les m&eacute;canismes li&eacute;s &agrave; la perception auditive spatiale et &agrave; l&rsquo;analyse de sc&egrave;nes pour d&eacute;velopper des syst&egrave;mes &laquo; intelligents &raquo; dont la d&eacute;corr&eacute;lation/modulation des signaux vis-&agrave;-vis de leurs contenus fr&eacute;quentiels.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">N&uacute;ria Gim&eacute;nez-Comas et Marlon Schumacher</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\" style=\"text-align: center;\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202018/.thumbnails/nuria_marlon.jpg/nuria_marlon-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographies</h3>\r\n<p><span>N&uacute;ria Gim&eacute;nez-Comas &eacute;tudie le piano puis la composition &agrave; Barcelone. Elle se forme aupr&egrave;s de Christophe Havel qui la confronte d'embl&eacute;e &agrave; l'&eacute;lectroacoustique pure et &agrave; l'importance du travail du timbre &mdash; &agrave; la fois au niveau de l'expansion timbrique et harmonique, de la coh&eacute;sion timbre/harmonie ou de l'interaction de l'informatique et de l&rsquo;instrumentiste &mdash; et de Mauricio Sotelo, avec qui elle travaille l'architecture formelle.</span>&nbsp;Elle poursuit ses &eacute;tudes &agrave; la Haute &Eacute;cole de musique de Gen&egrave;ve avec Michael Jarrell, Luis Na&oacute;n et &Eacute;ric Daubresse. Dans le cadre de son m&eacute;moire de master, elle axe sa recherche sur certains aspects de la perception sonore et d&eacute;veloppe sa r&eacute;flexion sur les concepts d'image sonore et de masquage. 
<span>Attir&eacute;e par le travail des images et le pluridisciplinaire, elle participe en 2012 &agrave; l'atelier <em>In Vivo-Video</em> de l'Acad&eacute;mie ManiFeste <u>et suit</u> le Cursus de composition et d&rsquo;informatique musicale de l'Ircam (cursus 1 &amp; 2). Elle y r&eacute;alise des projets sur la synth&egrave;se par mod&egrave;les physiques et un projet sur les sc&egrave;nes sonores par l&rsquo;utilisation du syst&egrave;me de spatialisation en 3D Ambisonics.</span>&nbsp;Laur&eacute;ate de nombreux concours dont le prix Colegio de Espa&ntilde;a (Paris) &ndash; INAEM 2012 et le premier prix du concours International Edison-Denisov, ses pi&egrave;ces ont &eacute;t&eacute; jou&eacute;es par des interpr&egrave;tes de renom comme le Quatuor Diotima, le Brussels Philharmonic, l'Ensemble Contrechamps et le trio du Klangforum Wien.</p>\r\n<p>Marlon Schumacher &eacute;tudie la musicologie et la philosophie &agrave; l'universit&eacute; Eberhard-Karls de T&uuml;bingen. Il est dipl&ocirc;m&eacute; de la HMDK de Stuttgart en th&eacute;orie musicale, m&eacute;dias num&eacute;riques et composition, et docteur en technologie musicale de l&rsquo;universit&eacute; McGill. Chercheur et conf&eacute;rencier en synth&egrave;se sonore spatiale, composition assist&eacute;e par ordinateur et interfaces musicales, ses contributions dans ces domaines sont vari&eacute;es : publications, pr&eacute;sentations, ateliers, plusieurs logiciels open source et projets art-science. Il est membre permanent du comit&eacute; de la conf&eacute;rence MuSA. Il y donne son expertise dans l'attribution de bourses de recherche et est impliqu&eacute; comme critique scientifique pour des conf&eacute;rences sur l'informatique musicale et les instruments num&eacute;riques telles que NIME et l&rsquo;ICMC. 
En 2017, Marlon Schumacher est nomm&eacute; professeur de musique informatique &agrave; l'Institut f&uuml;r Musikwissenschaft und Musikinformatik ainsi que directeur du ComputerStudio &agrave; la Hochschule f&uuml;r Musik de Karlsruhe.</p>\r\n</div>\r\n</div>\r\n<h2 class=\"dotted\">Liens</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"http://www.nuriagimenezcomas.com/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>http://www.nuriagimenezcomas.com/</a></li>\r\n<li><a href=\"http://www.music.mcgill.ca/marlonschumacher/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>http://www.music.mcgill.ca/marlonschumacher/</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 2,
                "name": "MaxMSP",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 32,
                "name": "Recherche Artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 45,
                "name": "Spat5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sculpter-lespace-resynthese-spatiale-en-3d-de-structures-sonores-complexes",
        "pk": 18,
        "published": true,
        "publish_date": "2019-03-21T11:53:15+01:00"
    },
    {
        "title": "Attaque de poésie 01 - Joenio Marques da Costa, Mari Moura",
        "description": "La performance est composée d'art sonore, d'art de la performance, de synthétiseurs, d'échantillons sonores et de codage en direct à l'aide d'outils logiciels libres, tels que Sonic Pi, TidalCycles, Super Collider, Le Biniou, PureData, dublang, OBS et d'autres outils, y compris non seulement les outils de codage en direct, mais aussi tout outil disponible via des interfaces de ligne de commande textuelle. Le live coding est une pratique de création artistique et une communauté de construction et d'utilisation de la technologie basée sur de nouveaux arrangements non utilitaires pour la production de la technologie. La pratique du codage en direct peut être considérée comme un moyen de créer des œuvres sonores et/ou visuelles à l'aide de langages de programmation.",
        "content": "<p><a href=\"https://poetryattack.4two.art/poetryattack01-iclc23.jpg\"></a></p>\r\n<p><a href=\"https://poetryattack.4two.art/poetryattack01-iclc23.jpg\"><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></a><br />Pr&eacute;sent&eacute;&nbsp;par : Joenio Marques da Costa, Mari Moura&nbsp;<br /><a href=\"https://forum.ircam.fr/profile/joenio/\">Biographie Joenio Marques da Costa<br /></a><a href=\"https://forum.ircam.fr/profile/marimoura/\">Biographie Mari Moura&nbsp;</a></p>\r\n<p></p>\r\n<p><a href=\"https://poetryattack.4two.art/poetryattack01-iclc23.jpg\"><img src=\"https://poetryattack.4two.art/poetryattack01-iclc23.jpg\" /></a></p>\r\n<blockquote>\r\n<p>Combien de femmes y a-t-il ici ?<br />Combien de Noirs y a-t-il ici ?<br />Combien y a-t-il de femmes noires ?</p>\r\n</blockquote>\r\n<p>En ces temps de d&eacute;placement, il est n&eacute;cessaire d'y r&eacute;fl&eacute;chir. Dans cette performance, les artistes Mari Moura et Joenio M. Costa m&egrave;nent une attaque po&eacute;tique en utilisant des techniques de codage en direct, l'&eacute;criture de textes et des mouvements corporels pour remettre en question l'absence des femmes, des Noirs et des femmes noires dans les espaces d'art et de technologie. L'absence des femmes, des Noirs et des femmes noires fait partie d'un syst&egrave;me syst&eacute;mique raciste qui favorise l'exclusion. 
L'acte de performance est un appel &agrave; la r&eacute;flexion sur les r&ocirc;les et les possibilit&eacute;s de changement.</p>\r\n<p>La performance est compos&eacute;e d'art sonore, d'art de la performance, de synth&eacute;tiseurs, d'&eacute;chantillons sonores et de codage en direct &agrave; l'aide d'outils logiciels libres, tels que Sonic Pi, TidalCycles, Super Collider, Le Biniou, PureData, dublang, OBS et d'autres outils, y compris non seulement les outils de codage en direct, mais aussi tout outil disponible via des interfaces de ligne de commande textuelle. Le live coding est une pratique de cr&eacute;ation artistique et une communaut&eacute; de construction et d'utilisation de la technologie bas&eacute;e sur de nouveaux arrangements non utilitaires pour la production de la technologie. La pratique du codage en direct peut &ecirc;tre consid&eacute;r&eacute;e comme un moyen de cr&eacute;er des &oelig;uvres sonores et/ou visuelles &agrave; l'aide de langages de programmation.</p>\r\n<h2 id=\"exhibitions\">Expositions</h2>\r\n<ul>\r\n<li>Algorave Brasil 2022</li>\r\n</ul>\r\n<blockquote>\r\n<p>Pr&eacute;sent&eacute; en ligne &agrave; la conf&eacute;rence annuelle de l'<a href=\"https://algoravebrasil.gitlab.io/eventos/2023/pt\">Algorave Brasil 2023</a>.</p>\r\n</blockquote>\r\n<blockquote>YouTube vid&eacute;o: <a href=\"https://www.youtube.com/watch?v=S0_iRd8Uu1Y&amp;t=4901\">https://www.youtube.com/watch?v=S0_iRd8Uu1Y&amp;t=4901</a></blockquote>\r\n<ul>\r\n<li>ICLC 2023</li>\r\n</ul>\r\n<blockquote>\r\n<p>S&eacute;lectionn&eacute; &agrave; ICLC 2023 dans la cat&eacute;gorie&nbsp;<a href=\"https://iclc.toplap.org/2023/catalogue/event/choreographic-coding.html\">Choreographic Coding</a> - ICLC 2023 Catalogue.</p>\r\n</blockquote>\r\n<blockquote>YouTube Vid&eacute;o: <a href=\"https://www.youtube.com/watch?v=n9zxAhXduH4\">https://www.youtube.com/watch?v=n9zxAhXduH4</a></blockquote>\r\n<ul>\r\n<li>Live Coding Brasil: Turn&ecirc; 
2023</li>\r\n</ul>\r\n<blockquote>\r\n<p>Pr&eacute;sentation en direct dans certaines villes br&eacute;siliennes&nbsp;lors de la&nbsp;<a href=\"https://tour23.4two.art/\">Live Coding Brasil: Turn&ecirc; 2023</a>.</p>\r\n</blockquote>\r\n<blockquote>YouTube Vid&eacute;o: <a href=\"https://www.youtube.com/watch?v=x2z6W7N-Qvo&amp;t=922\">https://www.youtube.com/watch?v=x2z6W7N-Qvo&amp;t=922</a></blockquote>\r\n<h2 id=\"authors\">Auteurs</h2>\r\n<ul>\r\n<li>Mari Moura - <a href=\"https://marimoura.4two.art\">marimoura.4two.art</a></li>\r\n<li>Joenio Marques da Costa - <a href=\"https://joenio.me\">joenio.me</a></li>\r\n</ul>\r\n<p>Mari Moura et Joenio M Costa exp&eacute;rimentent le live coding + body performance depuis un certain temps maintenant et ils ont atteint un niveau de maturit&eacute; o&ugrave; il y a une sorte de communication pendant les performances de live coding pour sugg&eacute;rer des mouvements corporels &agrave; improviser pendant la performance en direct.</p>\r\n<p><strong>Mari Moura&nbsp;</strong>est une artiste et chercheuse en art de la performance, activiste pour la pr&eacute;sence des femmes noires dans l'art et la technologie. Elle s'int&eacute;resse &agrave; la relation entre l'art, le corps, les mod&egrave;les algorithmiques et la technologie dans l'espace tangible et le cyberespace, ainsi qu'aux arts visuels, au codage en direct et &agrave; la notation corporelle. Elle est titulaire d'un doctorat en arts visuels dans la ligne de recherche Art et Technologie &agrave; l'UNB, avec un doctorat en alternance &agrave; Paris V &agrave; l'Institut des sciences du sport-sant&eacute; de Paris V (I3SP).</p>\r\n<p><strong>Joenio M. da Costa&nbsp;</strong>est un ing&eacute;nieur en logiciel de recherche, un activiste du logiciel libre, un artiste informatique et un musicien exp&eacute;rimental. Il s'int&eacute;resse &agrave; la musique algorithmique, &agrave; l'audiovisuel, &agrave; la d&eacute;mosc&egrave;ne et au codage en direct. 
Il est titulaire d'une ma&icirc;trise dans le domaine de l'ing&eacute;nierie logicielle et est un expert en durabilit&eacute; des logiciels de recherche. Il est instructeur de The Carpentries, ambassadeur de Software Heritage et contributeur au syst&egrave;me d'exploitation universel Debian. Il cr&eacute;e et maintient son propre outil de codage en direct appel&eacute;&nbsp;<a href=\"https://dublang.4two.art\">dublang</a>.</p>\r\n<h3 id=\"software-used-by-authors\">Logiciels&nbsp;utilis&eacute;s&nbsp;par les auteurs</h3>\r\n<ul>\r\n<li>dublang</li>\r\n<li>Sonic Pi</li>\r\n<li>Tidal Cycles</li>\r\n<li>SuperCollider</li>\r\n<li>Le Biniou</li>\r\n<li>PureData</li>\r\n<li>OpenMusic</li>\r\n<li>eSpeak</li>\r\n<li>Debian</li>\r\n<li><em>and many other invisible free software tools&hellip;</em></li>\r\n<li>TODO (Feyerabend 1975)</li>\r\n</ul>\r\n<h3 id=\"space-and-equipment-required\">Espace&nbsp;et&nbsp;&eacute;quipement&nbsp;n&eacute;cessaires</h3>\r\n<ul>\r\n<li>Hdmi and p2 cables</li>\r\n<li>Sound system</li>\r\n<li>2 projectors</li>\r\n<li>Work table</li>\r\n<li>Chair</li>\r\n<li>1 microphone.</li>\r\n<li>Wifi with internet</li>\r\n<li>Room or hall with free space for body movements</li>\r\n</ul>\r\n<h2 id=\"license\">License</h2>\r\n<p>GPLv3</p>\r\n<h2 id=\"references\">R&eacute;f&eacute;rences</h2>\r\n<p><a href=\"https://joenio.me/poetry-attack-01-iclc-2023\">https://joenio.me/poetry-attack-01-iclc-2023</a></p>\r\n<p><a href=\"https://gitlab.com/joenio/iclc23-poetry-attack-01\">https://gitlab.com/joenio/iclc23-poetry-attack-01</a><a href=\"https://gitlab.com/joenio/iclc23-poetry-attack-01\"></a></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 1812,
                "name": "art performance",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1811,
                "name": "body art",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1810,
                "name": "free software",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1809,
                "name": "live coding",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1814,
                "name": "supercollider",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1813,
                "name": "tidalcycles",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 52702,
            "forum_user": {
                "id": 52640,
                "user": 52702,
                "first_name": "Joenio",
                "last_name": "Marques da Costa",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/5aff57c7238795d3b159d493ab071d6c?s=120&d=retro",
                "biography": "Joenio M. Costa is computational artist and experimental musician with interest on algorithmic music, audiovisual, demoscene and live coding. He is a PhD candidate in Computer Science at Federal University of Bahia (UFBA), currently working as a Research Software Engineer at CorTexT Platform, Free Software supporter, Debian contributor, The Carpentries instructor, Software Heritage ambassador and co-creator of the laboratory on research, creation and experimentation in arts, science and technology 4two.art (https://4two.art). Author of the dublang multi-language live coding tool (https://dublang.4two.art).",
                "date_modified": "2024-04-01T13:34:36.487616+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "joenio",
            "first_name": "Joenio",
            "last_name": "Marques da Costa",
            "bookmarks": []
        },
        "slug": "poetry-attack-01",
        "pk": 2746,
        "published": true,
        "publish_date": "2024-02-16T14:44:04+01:00"
    },
    {
        "title": "Xp for Live : dernières nouvelles et perspectives d'avenir de l'environnement audio 3D pour Ableton Live (à l'aide de spat~) - Fraction (Eric Raynaud)",
        "description": "Deux ans après sa sortie, Xp for Live (Xp4l), une interface de conception audio 3D pour Ableton utilisant Spat, s'est fermement établie dans une communauté diversifiée et croissante allant des arts médiatiques aux passionnés du son. Lors de la session du Forum Ircam, je ferai une visite et une démonstration de la dernière version, Xp 1.20. De plus, j'explorerai le projet Xp.iko, une version adaptée au haut-parleur emblématique Iko de Sonible, actuellement en cours de collaboration avec Spaes Lab Studio à Berlin et utilisant également la bibliothèque Spat de l'Ircam. Nous examinerons les améliorations futures potentielles de l'outil, offrant un aperçu des développements passionnants qui façonnent l'avenir de Xp.",
        "content": "<p><img src=\"/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par : Fraction (Eric Raynaud)<br /><a href=\"https://forum.ircam.fr/profile/fraction/\">Biographie</a></p>\r\n<p></p>\r\n<p>Cet article offre un aper&ccedil;u d'une pr&eacute;sentation pr&eacute;vue pour le Forum 2024 de l'Ircam, comprenant deux segments distincts :</p>\r\n<p><strong>Xp for Live</strong></p>\r\n<p>La premi&egrave;re partie sera consacr&eacute;e &agrave; la d&eacute;monstration des fonctionnalit&eacute;s de<a href=\"http://www.xp4l.com\">&nbsp;Xp for Live </a>, une interface de conception audio en 3D sp&eacute;cialement con&ccedil;ue pour Ableton Live. Au cours des deux derni&egrave;res ann&eacute;es, Xp a &eacute;t&eacute; largement adopt&eacute; par les cr&eacute;ateurs dans divers domaines, des nouveaux m&eacute;dias &agrave; l'art sonore. L'int&eacute;gration transparente de Spat~ dans l'&eacute;cosyst&egrave;me Ableton a consolid&eacute; sa position d'outil de r&eacute;f&eacute;rence pour les professionnels et les passionn&eacute;s de l'audio. Au cours de cette pr&eacute;sentation, une exploration en profondeur de la derni&egrave;re it&eacute;ration, Xp 1.20, sera men&eacute;e, mettant en &eacute;vidence ses caract&eacute;ristiques am&eacute;lior&eacute;es, ses fonctionnalit&eacute;s, ainsi que ses avantages et ses limites. 
Les participants obtiendront des informations pr&eacute;cieuses sur les capacit&eacute;s de Xp et son r&ocirc;le dans le fa&ccedil;onnement de l'avenir de la conception audio au sein de l'environnement Ableton Live.</p>\r\n<p><a href=\"https://www.youtube.com/watch?v=oYsRp-lya14&amp;t=1715s\" title=\"Xp 1.20\"><img alt=\"Xp 1.20\" src=\"https://forum.ircam.fr/media/uploads/user/380956743b188a4cd875ee6a1ff82f53.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"1313\" height=\"755\" /></a></p>\r\n<p>&nbsp;</p>\r\n<p><strong>Xp.iko</strong></p>\r\n<p>Le projet Xp.iko repr&eacute;sente un effort m&eacute;ticuleux pour r&eacute;imaginer les outils et les interfaces utilis&eacute;s pour interagir avec le haut-parleur Iko de Sonible dans le cadre de l'architecture &eacute;tablie de Xp pour Ableton Live, en tenant compte de l'histoire des cr&eacute;ations pass&eacute;es, des potentiels et des limites. Il vise &agrave; introduire une nouvelle perspective sur la composition, en attirant non seulement une nouvelle g&eacute;n&eacute;ration de cr&eacute;ateurs mais aussi en invitant les utilisateurs exp&eacute;riment&eacute;s &agrave; red&eacute;couvrir les nuances de la fabrication de cet instrument embl&eacute;matique. Actuellement en cours de d&eacute;veloppement en collaboration avec Spaes Lab Studio &agrave; Berlin et s'appuyant sur la biblioth&egrave;que Spat de l'Ircam, Xp.iko cherche &agrave; faciliter une exploration plus profonde de la conception audio &agrave; l'aide d'Ableton. 
Les participants auront un aper&ccedil;u des m&eacute;thodologies et du futur flux de travail qui sous-tendent cette initiative et de son potentiel &agrave; fa&ccedil;onner le futur paysage de la composition pour le haut-parleur Iko.</p>\r\n<p><img alt=\"Xp.iko at Spaes Lab\" src=\"https://forum.ircam.fr/media/uploads/user/67bb857174e090013d7777728fde5be4.jpeg\" style=\"display: block; margin-left: auto; margin-right: auto;\" width=\"528\" height=\"528\" /></p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a>&nbsp;</strong></p>",
        "topics": [
            {
                "id": 621,
                "name": "3daudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 207,
                "name": "Ableton",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 453,
                "name": "Ericraynaud",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1838,
                "name": "iko",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 622,
                "name": "Immersiveaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1837,
                "name": "ircamforum",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 450,
                "name": "Ircamspat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 103,
                "name": "MaxforLive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1835,
                "name": "maxmspjitter",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1839,
                "name": "sonible",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 900,
                "name": "spatialaudio ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1836,
                "name": "Xp",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 901,
                "name": "xp4l",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1009,
                "name": "xpforlive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1709,
            "forum_user": {
                "id": 1707,
                "user": 1709,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/profil4.png",
                "avatar_url": "/media/cache/49/37/4937ce84289a16db6f9d5ea374376dfb.jpg",
"biography": "Fraction (Eric Raynaud) is a new media artist, composer and sound artist whose work focuses in particular on immersive and audiovisual experience design.\n\nHis practice has developed from a background in music composition and spatial sound, which led him to build a comprehensive skill set in the field of new media art. He now devotes his time to writing and producing pieces integrating digital materials of different kinds. He is particularly interested in forms of experience that create strong interactions between generative art and sonic matter. Combining complex scenography and hybrid digital writing with visuals, sound and physical media, he aims in particular to forge links between contemporary art and the digital realm within the frame of radical experiences.\n\nFascinated by sound intensity, energy, ecstasy, and the idea of \"being able to sculpt digital disorder as a raw matter\", he finds in the lexicon of sound spatialization the appropriate field for designing atypical pieces, placing the immediate physical and emotional experience at the center of his writing.",
                "date_modified": "2025-12-29T12:55:11.027970+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fraction",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "xp-for-live-latest-news-and-future-insight-of-3d-audio-environment-for-ableton-live-using-spat",
        "pk": 2765,
        "published": true,
        "publish_date": "2024-02-22T11:47:49+01:00"
    },
    {
        "title": "Cognitive Feedback: Neuro-Affective Improvisation Between Brain, Code, and Sound, by Tiange Zhou & Marco Bidin",
        "description": "Cognitive Feedback is a project that maps a performer's brainwaves into live music performance. Using an EEG headset and Python, the system tracks mental states like focus and relaxation to control sound synthesis parameters in Max/MSP. This live improvisation is guided by pre-composed musical structures and spectral analysis. The result is a \"neuro-sonic ecosystem\" where the performer’s thoughts, pre-set musical rules, and live sound all evolve together as one instrument.",
        "content": "<p><strong><strong>➡️ This presentation is part of&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></strong></p>\r\n<p>Cognitive feedback investigates improvisation as a dynamic interplay between neural activity, pre-composed spectral intelligence, and live sound.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/1468f5909a7639715f42afa054dd90b5.jpg\" /></p>\r\n<p>The performer&rsquo;s BrainLink Pro headset captures eight EEG frequency bands&mdash;Delta, Theta, Low/High Alpha, Low/High Beta, Low/High Gamma&mdash;which are processed in Python to extract cognitive descriptors reflecting attention, relaxation, and oscillatory dynamics. These parameters modulate synthesis, spatialisation, and algorithmic transformations in Max/MSP, creating a live sonic environment that responds in real time to the performer&rsquo;s mental state.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2acdd21f085d44b31dcaa6b0e2889472.png\" /></p>\r\n<p>OpenMusic serves as a framework for algorithmic generation of spectral templates, partial distributions, structural seeds for improvisation, and for sound synthesis of fixed musical layers. Partiels is used in an offline analytical phase to provide high-resolution spectral decomposition of pre-composed materials. The analysis guides the mapping strategies and spectral vocabulary implemented in Max/MSP, ensuring that live EEG-driven improvisation unfolds within a rigorously structured, yet flexible sonic landscape.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f2441d0e542fb21b0a0b407a0bd2d9e1.png\" /></p>\r\n<p>By integrating offline spectral intelligence with real-time neuro-sonic feedback, the piece situates improvisation at the intersection of cognition, algorithmic reasoning, and auditory perception. 
The performer negotiates between intentional focus and emergent system behaviour, revealing improvisation as a neuro-sonic ecosystem in which thought, pre-analysed spectral structures, and sound co-evolve.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c044b56185dcfccad3df9e059e3ae889.jpg\" /></p>",
        "topics": [
            {
                "id": 4293,
                "name": "BrainLink Pro",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2788,
                "name": "Improvisation, generativity and co-creative interaction",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3194,
                "name": "Max 9",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 576,
                "name": "Partiels",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 53,
                "name": "Python",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20786,
            "forum_user": {
                "id": 20775,
                "user": 20786,
                "first_name": "Marco",
                "last_name": "Bidin",
                "avatar": "https://forum.ircam.fr/media/avatars/cv_pic.jpg",
                "avatar_url": "/media/cache/c8/12/c812194ab029dcbb2712b19a78eabf13.jpg",
"biography": "Marco Bidin is a composer, artistic director, organist and harpsichord player from Italy.\n\nAfter completing his Organ degree in Italy, he studied Early Music performance in Trossingen and Contemporary Music performance in Stuttgart. Subsequently, under the guidance of Marco Stroppa, he completed the terminal degree (Konzertexamen) in Composition and the Certificate of Advanced Studies in Computer Music.\n\nMarco Bidin is active as an international composer and performer. He has been invited to institutions such as IRCAM (Paris, France), Shanghai Conservatory (China), Silpakorn University (Bangkok, Thailand) and Seoul National University (South Korea), among others.\n\nHe worked as a lecturer in Composition at the HMDK Stuttgart and as an organist for the Protestant Church in Stuttgart. From 2010 to 2023 he was the artistic director of the Italian-based NGO association ALEA. He is currently Associate Professor in the Electronic Instrument Engineering Department of the Xinghai Conservatory of Music in Guangzhou, China.",
                "date_modified": "2026-03-04T11:59:23.041276+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 988,
                        "forum_user": 20775,
                        "date_start": "2024-10-29",
                        "date_end": "2025-10-29",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    },
                    {
                        "id": 634,
                        "forum_user": 20775,
                        "date_start": "2023-11-16",
                        "date_end": "2024-11-16",
                        "type": 0,
                        "keys": [
                            {
                                "id": 155,
                                "membership": 634
                            },
                            {
                                "id": 406,
                                "membership": 634
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "mbalea",
            "first_name": "Marco",
            "last_name": "Bidin",
            "bookmarks": []
        },
        "slug": "cognitive-feedback-neuro-affective-improvisation-between-brain-code-and-sound-by-tiange-zhou-marco-bidin",
        "pk": 4404,
        "published": true,
        "publish_date": "2026-02-20T11:29:37+01:00"
    },
    {
        "title": "'Points of Articulation: Machine learning, speech synthesis and vocal music' by Seth Scott.",
        "description": "A presentation on composing vocal music using AI-powered speech synthesis technologies.",
        "content": "<p>In this presentation I will discuss the composition of vocal music using AI-powered speech synthesis technologies. I will focus in particular on a project entitled Points of Articulation, which I produced during a two-month Artistic Research Residency at IRCAM in 2025, discussing the tools that I used, the compositional processes that I developed, and my broader aesthetic framework. Incorporating elements of algorithmic writing, sound poetry and improvised music, the work explores the historical and cultural significance of speech synthesisers, and their relationship to technologies of knowledge production and the human body.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/bed691126fca3eddc721e9ddcf429857.png\" width=\"1661\" height=\"726\" /></p>\r\n<p></p>\r\n<p><a href=\"https://forum.ircam.fr/collections/detail/forum-ircam-latvia/\">This&nbsp;talk is&nbsp;part of IRCAM Forum Workshops Hors-les-Murs 2025 Rīga-Liepāja (Latvia)</a></p>",
        "topics": [
            {
                "id": 314,
                "name": "Ai",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 197,
                "name": "Voice synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 67830,
            "forum_user": {
                "id": 67760,
                "user": 67830,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/64bad8805de92ae4ac19ab2a66dd73a7?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-09-30T10:21:22.274645+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "sethscott",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "points-of-articulation-machine-learning-speech-synthesis-and-vocal-music-by-seth-scott",
        "pk": 3608,
        "published": true,
        "publish_date": "2025-08-09T13:03:30+02:00"
    },
    {
        "title": "On the use of HSR as an upmix solution for stereo reproduction on multi-speaker systems by Quentin Nivromont",
"description": "Multi-speaker systems demand upmixing solutions that spatialize sound while preserving tonal balance and artistic intent. HSR (High Space Resolution) meets this challenge by transforming stereo for setups with up to 64 speakers, without relying on FFT. Designed for post-production, live events, car audio and home theaters, HSR ensures natural immersion and faithful reproduction of the original mix. Unlike traditional FFT-based tools, HSR avoids phase artifacts and respects original dynamics. This presentation explores HSR’s upmixing philosophy, compares it to stereo content processed through spatialization systems (e.g., IRCAM’s Spat), and demonstrates how combining both approaches enhances stereo reproduction on multi-speaker systems—delivering precision, flexibility, and artistic integrity.",
        "content": "<div>&nbsp;<strong>➡️ This presentation is part of<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/ircam-forum-workshops-paris-enghien-les-bains-2026/\">IRCAM Forum Workshops Paris / Enghien-les-Bains&nbsp;March 2026</a></strong></div>\r\n<p></p>\r\n<div><strong></strong></div>\r\n<div><strong>1. The Stereo Paradox in an Immersive World</strong></div>\r\n<div>&nbsp;</div>\r\n<div><strong>1.1. Stereo: A Spatial Encoding, Not Two Mono Channels</strong></div>\r\n<div>&nbsp;</div>\r\n<div>A stereo recording is not two independent audio signals. It is a correlated sound field encoded into two channels through deliberate inter-channel relationships &mdash; a principle established by Alan Blumlein in 1931 (UK Patent 394,325) and formalized through decades of psychoacoustic research (Blauert, <em>Spatial Hearing</em>, MIT Press, 1997; Rumsey, <em>Spatial Audio</em>, Focal Press, 2001).</div>\r\n<div>&nbsp;</div>\r\n<div>At any given moment, the stereo signal encodes:</div>\r\n<div>&nbsp;</div>\r\n<div>- <em>Localizable sources</em>&nbsp;&mdash; panned positions defined by Inter-channel Level Differences (ILD), perceived as phantom sources between the loudspeakers.</div>\r\n<div>- <em>Extended sources</em>&nbsp;&mdash; spatial width conveyed through partial inter-channel correlation, perceived as sources with apparent source width (ASW).</div>\r\n<div>- <em>Diffuse content</em>&nbsp;&mdash; ambience and reverberation encoded through low inter-channel coherence, contributing to listener envelopment (LEV).</div>\r\n<p>&nbsp;</p>\r\n<div>These parameters are measurable: Inter-Channel Coherence (ICC), Inter-channel Intensity Difference (IID), and Inter-channel Phase Difference (IPD) are standardized in ISO/IEC 23003-1 (MPEG Surround). The stereo signal is not a simplification &mdash; it is a complete spatial encoding within the constraints of two channels.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>1.2. 
The Hardware Has Outpaced the Content</strong></div>\r\n<p>&nbsp;</p>\r\n<div>Modern playback systems have far more loudspeakers than the two that stereo was designed for, yet the content pipeline remains overwhelmingly stereo. An estimated 97&ndash;99% of the world's recorded music catalog exists in stereo or mono format. Apple Music launched Spatial Audio with Dolby Atmos in May 2021 with \"thousands of songs\" &mdash; a fraction of its ~100 million track library (Apple Newsroom, May 2021). Streaming music, podcasts, broadcast content, and legacy archives are stereo.</div>\r\n<p>&nbsp;</p>\r\n<div>The result: multi-speaker systems play stereo through two speakers while the rest sit idle. The investment in immersive hardware produces no spatial benefit for the vast majority of content.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>1.3. The Missing Link</strong></div>\r\n<div>&nbsp;</div>\r\n<div>The challenge is clear: given that content is stereo and systems are multi-speaker, how do we bridge the gap faithfully?</div>\r\n<p>&nbsp;</p>\r\n<div>Three categories of existing solutions each fail in specific ways:</div>\r\n<p>&nbsp;</p>\r\n<div><em>Simple signal distribution (phantom, L/R&divide;2)</em>&nbsp;&mdash; routing L and R to additional speakers with attenuation. This is mathematically incorrect: energy summation is wrong, comb filtering occurs between correlated signals on multiple speakers, and the spatial field is not reconstructed but merely replicated.</div>\r\n<p>&nbsp;</p>\r\n<div><em>Spatialization systems (IRCAM Spat, L-ISA, d&amp;b Soundscape)</em>&nbsp;&mdash; these treat each input as a mono object with positional metadata. When a stereo pair is sent as two objects, the spatialization engine has no knowledge of their inter-channel relationship. It distributes L and R independently, destroying the correlated sound field, the encoded panning positions, and the diffuse/direct ratio. 
The result depends on the panning algorithm (VBAP, DBAP, Ambisonics, WFS), but none accounts for stereo inter-channel correlation.</div>\r\n<p>&nbsp;</p>\r\n<div><em>FFT-based upmixers</em> &mdash; frequency-domain analysis offers sophisticated spectral separation but introduces inherent artifacts, adds latency, and is heavy on CPU usage, which makes it unsuitable for live applications as well as for entry-level audio products.</div>\r\n<p>&nbsp;</p>\r\n<div>The missing link is an algorithm that <strong>understands stereo as a spatial encoding </strong>and reconstructs the encoded sound field across any number of loudspeakers &mdash; without spectral artifacts, without fabricating spatial content, and without ignoring inter-channel relationships.</div>\r\n<p>&nbsp;</p>\r\n<div>---</div>\r\n<p>&nbsp;</p>\r\n<div><strong>2. HSR: Design Philosophy and Architecture</strong></div>\r\n<p>&nbsp;</p>\r\n<div><strong>2.1. Core Principle: Decode the Sound Field, Not the Channels</strong></div>\r\n<p>&nbsp;</p>\r\n<div>HSR (High Space Resolution) is built on a single premise: <strong>stereo is a spatial encoding that must be decoded before it can be rendered to multiple loudspeakers</strong>.</div>\r\n<p>&nbsp;</p>\r\n<div>This is conceptually related to the primary-ambient decomposition framework established in the academic literature (Avendano &amp; Jot, <em>JAES</em>, 2004; Goodwin &amp; Jot, ICASSP, 2007; Faller &amp; Breebaart, AES 131st Convention, 2011), but with a critical distinction: HSR operates entirely in the time domain, without FFT, windowing, or frequency-domain transforms.</div>\r\n<p>&nbsp;</p>\r\n<div>The processing architecture has three stages:</div>\r\n<p>&nbsp;</p>\r\n<div><strong>2.2. 
Stage 1 &mdash; Inter-Channel Correlation Analysis</strong></div>\r\n<p>&nbsp;</p>\r\n<div>HSR continuously examines the relationship between L and R channels:</div>\r\n<p>&nbsp;</p>\r\n<div>- <em>Inter-channel level differences</em>: identifying where energy is positioned across the stereo panorama.</div>\r\n<div>- <em>Inter-channel phase relationships</em>: distinguishing coherent sources (high correlation) from diffuse content (low correlation).</div>\r\n<p>&nbsp;</p>\r\n<div>This analysis produces a continuous spatial map of the stereo field &mdash; not a discrete decomposition into \"center\" and \"sides\" (as in traditional Mid/Side processing), but a full distribution of energy across all panoramic positions.</div>\r\n<div>&nbsp;</div>\r\n<div>&nbsp;</div>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f7ed3ffdfe8cdd376032a616b2ab6d2c.png\" /></p>\r\n<div><strong>2.3. Stage 2 &mdash; Spatial Extraction</strong></div>\r\n<p>&nbsp;</p>\r\n<div>From the correlation analysis, HSR extracts spatial components as a continuous distribution. Each extracted component carries its position in the original stereo panorama, its energy level, and its coherence characteristics.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>2.4. Stage 3 &mdash; Output Distribution</strong></div>\r\n<p>&nbsp;</p>\r\n<div>The extracted spatial field is mapped to the target loudspeaker array. Each speaker receives a signal derived from the spatial components corresponding to its angular position, with:</div>\r\n<p>&nbsp;</p>\r\n<div>- <strong>Energy conservation</strong>: total acoustic power is maintained (iso-energy processing). Content does not get louder when spread across more speakers.</div>\r\n<div>- <strong>Spatial coherence preservation</strong>: correlated components remain correlated on the target array; diffuse components remain diffuse. 
No artificial decorrelation is added.</div>\r\n<div>- <strong>Timbral neutrality</strong>: no redundant correlated signals on adjacent speakers, avoiding the comb filtering that plagues simple distribution methods.</div>\r\n<p>&nbsp;</p>\r\n<div>The distribution adapts to any speaker configuration &mdash; symmetric or asymmetric.</div>\r\n<div>&nbsp;</div>\r\n<div><strong>3. HSR vs. Spatialization Systems: Complementary Tools</strong></div>\r\n<p>&nbsp;</p>\r\n<div><strong>3.1. The Spatialization Paradigm</strong></div>\r\n<p>&nbsp;</p>\r\n<div>Modern spatialization engines &mdash; IRCAM Spat/SPAT Revolution (Carpentier, Noisternig &amp; Warusfel, \"Twenty Years of Ircam Spat: Looking Back, Looking Forward,\" 41st ICMC, 2015), L-Acoustics L-ISA, d&amp;b Soundscape, Amadeus Holophonix &mdash; are designed for <strong>object-based audio</strong>. Each audio input is treated as a discrete point source with positional metadata, and the rendering engine calculates per-speaker amplitude coefficients using algorithms such as:</div>\r\n<p>&nbsp;</p>\r\n<div>- <strong>VBAP</strong>&nbsp;(Vector Base Amplitude Panning &mdash; Pulkki, <em>JAES</em>, 1997): point-like virtual sources positioned via loudspeaker triplet gain calculation.</div>\r\n<div>- <strong>DBAP</strong>&nbsp;(Distance-Based Amplitude Panning &mdash; Lossius, Baltazar &amp; de la Hogue, <em>SMC</em>, 2009): no assumptions about speaker layout or listener position; useful for irregular arrays and installations.</div>\r\n<div>- <strong>HOA</strong>&nbsp;(Higher Order Ambisonics &mdash; Gerzon, 1973; Daniel, 2000): scene-based encoding using spherical harmonics, decoded to any speaker configuration.</div>\r\n<div>- <strong>WFS</strong> (Wave Field Synthesis &mdash; Berkhout, <em>JAES</em>, 1988): physical wavefront reconstruction using dense speaker arrays, eliminating the sweet spot.</div>\r\n<p>&nbsp;</p>\r\n<div>These systems are powerful &mdash; but they are designed for mono source objects. 
When stereo content is introduced as two mono objects (L and R), the spatialization engine:</div>\r\n<div>&nbsp;</div>\r\n<div><strong>1</strong>. Has no information about inter-channel correlation.</div>\r\n<div><strong>2</strong>. Distributes L and R independently, ignoring their encoded spatial relationship.</div>\r\n<div><strong>3</strong>. Applies panning algorithms designed for point sources to what is actually a correlated sound field.</div>\r\n<div><strong>4</strong>. Produces output that may exhibit comb filtering, altered width, modified panning positions, and loss of envelopment &mdash; depending on the algorithm used and the source/speaker geometry.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>3.2. The L-ISA Stereo Mapper</strong></div>\r\n<div>&nbsp;</div>\r\n<div>L-Acoustics recognized this problem and introduced the <strong>Stereo Mapper</strong> feature in L-ISA 3.0. The Stereo Mapper \"maps existing stereo content to an immersive speaker configuration without changing the original artist's mix,\" distributing stereo content \"while conserving a similar power distribution as traditional left/right array configurations to retain the original stereo image and overall mix.\" (L-Acoustics, 2025)</div>\r\n<p>&nbsp;</p>\r\n<div>This is a practical acknowledgment that stereo cannot simply be fed to a spatialization engine as two mono objects. It is, in essence, an upmixing solution within a spatialization framework.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>3.3. The Combined Approach: HSR + Spatialization</strong></div>\r\n<p>&nbsp;</p>\r\n<div>The most powerful configuration uses HSR as a <strong>preprocessing stage</strong>&nbsp;before a spatialization engine:</div>\r\n<div>&nbsp;</div>\r\n<div>Stereo (2 ch) &rarr; HSR &rarr; N spatial components &rarr; Spat / L-ISA / Soundscape &rarr; Speakers</div>\r\n<p>&nbsp;</p>\r\n<div>In this workflow:</div>\r\n<div><strong>1</strong>. 
HSR decodes the stereo field into N spatially coherent components (each component represents a portion of the continuous panoramic distribution).</div>\r\n<div><strong>2</strong>. Each component is fed to the spatialization engine as an independent object &mdash; but unlike raw L/R, each object carries spatially meaningful content with coherent positioning.</div>\r\n<div><strong>3</strong>. The spatialization engine applies its rendering algorithm (VBAP, Ambisonics, WFS) to objects that are already spatially decomposed, not arbitrarily split stereo channels.</div>\r\n<div><strong>4</strong>. An additional algorithm, such as ICS (Interference Cancellation System), can remove any comb filtering that may still occur. Note that, compared with feeding a stereo signal directly to multiple speakers, a solution like HSR already reduces the comb-filtering effect.</div>\r\n<p>&nbsp;</p>\r\n<div>The result: <strong>the fidelity of stereo-aware upmixing combined with the flexibility of object-based spatialization</strong>. The sound designer retains full control over spatial positioning while the stereo field's encoded spatial information is preserved rather than destroyed.</div>\r\n<p>&nbsp;</p>\r\n<div>---</div>\r\n<p>&nbsp;</p>\r\n<div><strong>4. Application Domains</strong></div>\r\n<p>&nbsp;</p>\r\n<div><strong>4.1. Live Sound</strong></div>\r\n<p>&nbsp;</p>\r\n<div>The live sound sector is where the stereo-to-immersive gap is most acute. Immersive systems &mdash; L-Acoustics L-ISA, d&amp;b Soundscape, Amadeus Holophonix &mdash; are increasingly deployed in venues and touring productions. But the majority of playback content (backing tracks, DJ sets, pre-recorded sound effects, interval music) arrives as stereo.</div>\r\n<p>&nbsp;</p>\r\n<div>HSR addresses this directly:</div>\r\n<div>- <strong>Touring</strong>: the front-of-house engineer mixes in stereo (standard workflow, universal compatibility). 
HSR distributes the stereo mix across whatever speaker configuration exists at each venue &mdash; arena, theater, festival &mdash; with no per-venue preparation. Using multiple buses, the engineer can also change the spatial reproduction of any stem included in the master signal.</div>\r\n<div>- <strong>Theatre</strong>: pre-recorded sound effects and playback tracks become spatial events that use the full installed system, without re-editing for multichannel.</div>\r\n<div>- <strong>DJ performance</strong>: the DJ's stereo output feeds HSR, which expands it to fill main arrays, side fills, and ceiling speakers. The DJ works as always; the audience experiences spatial immersion.</div>\r\n<p>&nbsp;</p>\r\n<div>The 5-sample latency is critical: live sound is latency-intolerant. Monitor systems, front-of-house alignment, and time-aligned delay towers all require sub-millisecond processing delays. HSR's 104 &micro;s latency at 48 kHz is negligible in any live audio chain.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>4.2. Automotive</strong></div>\r\n<p>&nbsp;</p>\r\n<div>Modern premium automotive audio systems feature a large number of speakers distributed across doors, dashboard, A-pillars, headliner, rear deck, and subwoofer enclosures. A solution like HSR allows all of these speakers to be managed, with asymmetric reproduction that accounts for the driver's off-center listening position.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>4.3. Home Theater and Consumer Electronics</strong></div>\r\n<p>&nbsp;</p>\r\n<div>Soundbars (5&ndash;13 drivers), Atmos-enabled receivers (7.1.4, 9.1.6), and whole-home audio systems face the same content gap. 
HSR provides:</div>\r\n<div>- Meaningful utilization of every driver in the system for stereo content.</div>\r\n<div>- Center channel content derived from the stereo field (not a mono sum with comb filtering artifacts).</div>\r\n<div>- Height speakers receiving spatially appropriate content (not disconnected ambience).</div>\r\n<div>- Video synchronization guaranteed by sub-millisecond latency.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>4.4. Broadcast</strong></div>\r\n<p>&nbsp;</p>\r\n<div>Broadcast facilities live in format mismatch: legacy archives are stereo, live feeds arrive stereo, international content varies. HSR provides artifact-free format conversion:</div>\r\n<div>- No pre-echo on speech transients (critical for dialogue intelligibility).</div>\r\n<div>- No musical noise during quiet passages.</div>\r\n<div>- No spectral smearing on complex material.</div>\r\n<div>- Real-time, on-air operation with broadcast reliability.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>5. Spacelite: HSR in Practice</strong></div>\r\n<p>&nbsp;</p>\r\n<div><strong>Spacelite</strong>&nbsp;is the software implementation of HSR, available as a standalone application for macOS and Windows.</div>\r\n<div>&nbsp;</div>\r\n<div><strong>HSR upmix engine</strong>: Stereo &rarr; any speaker configuration</div>\r\n<div><strong>4 stereo inputs</strong>: Mix multiple stems simultaneously</div>\r\n<div><strong>Full routing matrix</strong>: Per-channel weight, pan, and gain</div>\r\n<div><strong>Preset system</strong>: Save and recall complete configurations</div>\r\n<div><strong>MIDI/OSC control</strong>: External automation and integration</div>\r\n<div><strong>HCC algorithm bass management</strong>: Phase-aware subwoofer crossover</div>\r\n<div>&nbsp;</div>\r\n<div>Spacelite is designed for immediate deployment: define your speaker positions, connect stereo sources, and the system produces spatial output in minutes. 
No spatial mixing expertise required; no content preparation necessary.</div>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e06c5ab6d0ce06d0aced4d2c5390f401.png\" /></p>\r\n<div><strong>6. Conclusion: Upmixing as a Necessary Discipline</strong></div>\r\n<p>&nbsp;</p>\r\n<div>The audio industry is experiencing a fundamental asymmetry: playback systems have multiplied their speaker count while the content catalog remains overwhelmingly stereo. This gap cannot be solved by native immersive production alone &mdash; the economics and logistics of re-mixing decades of stereo content for multichannel are prohibitive, and new stereo content continues to be produced at vastly greater volume than native immersive content.</div>\r\n<p>&nbsp;</p>\r\n<div>Upmixing is not a compromise. It is the technically correct solution to a real engineering problem: <em>how to render a two-channel spatial encoding faithfully across a multi-speaker array</em>. The quality of the solution depends entirely on the quality of the algorithm.</div>\r\n<p>&nbsp;</p>\r\n<div><strong>HSR addresses this problem from first principles:</strong></div>\r\n<div>- It treats stereo as what it is: a spatial encoding, not two mono signals.</div>\r\n<div>- It operates in the time domain, eliminating the artifact classes inherent to frequency-domain processing.</div>\r\n<div>- It preserves artistic intent by extracting and redistributing the spatial information that mix engineers encoded &mdash; not by fabricating new spatial content.</div>\r\n<div>- It complements existing spatialization systems rather than competing with them &mdash; the combined HSR + Spat workflow demonstrates that upmixing and spatialization are not alternatives but complementary stages in the spatial audio chain.</div>\r\n<p>&nbsp;</p>\r\n<div>The missing link in the immersive audio chain is not more speakers, more formats, or more metadata. It is a faithful stereo decoder. 
That is what HSR provides.</div>\r\n<p>&nbsp;</p>\r\n<div>---</div>\r\n<p>&nbsp;</p>\r\n<div><strong>References</strong></div>\r\n<p>&nbsp;</p>\r\n<div><strong>Standards</strong></div>\r\n<p>&nbsp;</p>\r\n<div><em>- ISO/IEC 23003-1 &mdash; MPEG Surround (Spatial Audio Coding), parametric stereo parameters (ICC, IID, IPD).</em></div>\r\n<div><em>- ITU-R BS.775-4 (2022) &mdash; \"Multichannel stereophonic sound system with and without accompanying picture.\"</em></div>\r\n<div><em>- ITU-R BS.2051-3 (2022) &mdash; \"Advanced sound system for programme production.\"</em></div>\r\n<p>&nbsp;</p>\r\n<div><strong>Patents</strong></div>\r\n<p>&nbsp;</p>\r\n<div><em>- Blumlein, A.D. &mdash; UK Patent 394,325, \"Improvements in and relating to Sound-transmission, Sound-recording and Sound-reproducing Systems,\" filed 14 December 1931, accepted 14 June 1933.</em></div>\r\n<p>&nbsp;</p>\r\n<div><strong>Peer-Reviewed Publications</strong></div>\r\n<p>&nbsp;</p>\r\n<div><em>- Avendano, C. &amp; Jot, J.-M. &mdash; \"A Frequency-Domain Approach to Multichannel Upmix,\" J. Audio Eng. Soc., vol. 52, no. 7/8, pp. 740&ndash;749, 2004.</em></div>\r\n<div><em>- Berkhout, A.J. &mdash; \"A holographic approach to acoustic control,\" J. Audio Eng. Soc., December 1988.</em></div>\r\n<div><em>- Berouti, M., Schwartz, R. &amp; Makhoul, J. &mdash; \"Enhancement of speech corrupted by acoustic noise,\" Proc. ICASSP, 1979.</em></div>\r\n<div><em>- Blauert, J. &mdash; Spatial Hearing: The Psychophysics of Human Sound Localization, revised edition, MIT Press, 1997. <a href=\"https://direct.mit.edu/books/oa-monograph/4885/Spatial-Hearing\">Open Access</a></em></div>\r\n<div><em>- Carpentier, T., Noisternig, M. &amp; Warusfel, O. &mdash; \"Twenty Years of Ircam Spat: Looking Back, Looking Forward,\" 41st International Computer Music Conference, 2015. <a href=\"https://www.researchgate.net/publication/298982788\">ResearchGate</a></em></div>\r\n<div><em>- Faller, C. &amp; Breebaart, J. 
&mdash; \"Binaural Reproduction of Stereo Signals Using Upmixing and Diffuse Rendering,\" AES 131st Convention, 2011.</em></div>\r\n<div><em>- Gerzon, M.A. &mdash; \"Periphony: With-Height Sound Reproduction,\" *J. Audio Eng. Soc.*, vol. 21, no. 1, pp. 2&ndash;10, 1973.</em></div>\r\n<div><em>- Goodwin, M. &amp; Jot, J.-M. &mdash; \"Primary-Ambient Signal Decomposition and Vector-Based Localization for Spatial Audio Coding and Enhancement,\" *Proc. ICASSP*, 2007.</em></div>\r\n<div><em>- Goodwin, M. &amp; Jot, J.-M. &mdash; \"Spatial Audio Scene Coding,\" AES 125th Convention, 2008. [AES E-Library](https://www.aes.org/e-lib/browse.cfm?elib=14334)</em></div>\r\n<div><em>- Lossius, T., Baltazar, P. &amp; de la Hogue, T. &mdash; \"DBAP &mdash; Distance-Based Amplitude Panning,\" *Proc. SMC*, 2009.</em></div>\r\n<div><em>- Painter, T. &amp; Spanias, A. &mdash; \"Perceptual coding of digital audio,\" *Proc. IEEE*, vol. 88, no. 4, pp. 451&ndash;515, 2000.</em></div>\r\n<div><em>- Pulkki, V. &mdash; \"Virtual Sound Source Positioning Using Vector Base Amplitude Panning,\" *J. Audio Eng. Soc.*, vol. 45, no. 6, pp. 456&ndash;466, 1997. [AES E-Library](https://aes.org/publications/elibrary-page/?id=7853)</em></div>\r\n<div><em>- Rumsey, F. &mdash; *Spatial Audio*, Focal Press / Routledge, 2001.</em></div>\r\n<div><em>- Vickers, E. &mdash; \"Fixing the Phantom Center: Diffusing Acoustical Crosstalk,\" AES 127th Convention, Paper 7916, 2009.</em></div>\r\n<div><em>- Zotter, F. &amp; Frank, M. &mdash; *Ambisonics: A Practical 3D Audio Theory*, Springer, 2019. 
<a href=\"https://link.springer.com/book/10.1007/978-3-030-17207-7\">Springer</a></em></div>\r\n<p>&nbsp;</p>\r\n<div><strong>Spatialization Systems Documentation</strong></div>\r\n<p>&nbsp;</p>\r\n<div><em>- IRCAM / FLUX:: Immersive &mdash; <a href=\"https://doc.flux.audio/spat-revolution/Spatialisation_Technology_Panning_Algorithms.html\">SPAT Revolution Documentation</a></em></div>\r\n<div><em>- L-Acoustics &mdash; <a href=\"https://www.l-acoustics.com/products/l-isa-immersive/\">L-ISA Immersive</a></em></div>\r\n<div><em>- L-Acoustics &mdash; <a href=\"https://www.l-acoustics.com/press-releases/l-acoustics-launches-l-isa-3-0-the-most-powerful-and-accessible-immersive-audio-platform-for-live-audio-professionals-and-music-creators/\">L-ISA 3.0 Stereo Mapper</a></em></div>\r\n<div><em>- d&amp;b audiotechnik &mdash; <a href=\"https://www.dbaudio.com/global/en/solutions/soundscape/\">Soundscape</a></em></div>\r\n<div><em>- Amadeus &mdash; <a href=\"https://music-group.com/holophonix/\">Holophonix</a></em></div>\r\n<p>&nbsp;</p>\r\n<div><strong>Industry Sources</strong></div>\r\n<p>&nbsp;</p>\r\n<div><em>- Apple Newsroom &mdash; <a href=\"https://www.apple.com/newsroom/2021/05/apple-music-announces-spatial-audio-and-lossless-audio/\">Apple Music Announces Spatial Audio and Lossless Audio</a>, May 2021.</em></div>\r\n<div><em>- vrtonung.de &mdash; <a href=\"https://www.vrtonung.de/en/dolby-atoms-for-cars-automotive-brands-spatial-audio-overview/\">Dolby Atmos Car Spatial Audio &mdash; Overview of Automotive Brands</a></em></div>\r\n<div><em>- DAM Audio &mdash; <a href=\"https://www.dam-audio.com/research/hsr-upmix-technology\">HSR Upmix Technology</a></em></div>\r\n<div><em>- DAM Audio &mdash; <a href=\"https://www.dam-audio.com/spacelite-standalone\">Spacelite</a></em></div>",
        "topics": [
            {
                "id": 2342,
                "name": "3d audio",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4332,
                "name": "soundfield",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4331,
                "name": "stereo",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4330,
                "name": "upmix",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 26272,
            "forum_user": {
                "id": 26245,
                "user": 26272,
                "first_name": "Quentin",
                "last_name": "Nivromont",
                "avatar": "https://forum.ircam.fr/media/avatars/1770214308083.jpeg",
                "avatar_url": "/media/cache/7b/0f/7b0f8c7e4042bf2c899f179fcfb4be76.jpg",
                "biography": "Sound & DSP engineer  for 15+ years, I am an expert in 3D audio spatialization and the founder of Digital Audio Manufacture (DAM Audio). Having also worked for companies such as IRCAM, Devialet, and Amadeus, I develop innovative algorithms for upmixing, sound spatialization (on speakers and in binaural), reverberation, and system calibration. These lightweight and user-friendly solutions are designed for industrial clients, studios, and live entertainment professionals.",
                "date_modified": "2026-03-03T14:21:31.348683+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 575,
                        "forum_user": 26245,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-24",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "nivromont",
            "first_name": "Quentin",
            "last_name": "Nivromont",
            "bookmarks": []
        },
        "slug": "on-the-use-of-an-hsr-as-an-upmix-solution-for-stereo-reproduction-on-multi-speaker-systems",
        "pk": 4431,
        "published": true,
        "publish_date": "2026-02-26T19:40:01+01:00"
    },
    {
        "title": "Les formations professionnelles de l’Ircam 2022-2023",
        "description": "Découvrez le choix de formations professionnelles proposées par l'Ircam sur les technologies du Forum.",
        "content": "<p>Les formations sur les logiciels de l'Ircam sont destin&eacute;es aux&nbsp;professionnels&nbsp;du spectacle dans les domaines suivants : cin&eacute;ma, musique, danse, th&eacute;&acirc;tre et artistes auteur<span>&bull;e&bull;</span>s ; compositeur<span>&bull;trice&bull;</span>s de musique informatique ; enseignant<span>&bull;e&bull;</span>s dans le domaine de la musique &eacute;lectroacoustique et la musique assist&eacute;e par ordinateur ; scientifiques dans le domaine de l&rsquo;informatique musicale et ing&eacute;nierie sonore.</p>\r\n<h2></h2>\r\n<p></p>\r\n<h4 style=\"text-align: left;\">Les abonn&eacute;s au Forum Premium b&eacute;n&eacute;ficient d'une r&eacute;duction de 40 &agrave; 60% sur toutes les formations Ircam</h4>\r\n<p>Les membres du<span>&nbsp;</span><a href=\"https://www.ircam.fr/innovations/le-forum/\">Forum</a><span>&nbsp;</span>ayant souscrit &agrave; l'abonnement<span>&nbsp;</span><a href=\"https://forum.ircam.fr/about/welcome/\">Forum Premium</a>&nbsp;b&eacute;n&eacute;ficient de tarifs d&eacute;gressifs sur la saison en cours, en fonction du nombre de formations achet&eacute;es : 40% de r&eacute;duction sur le plein tarif pour le 1er stage achet&eacute;, 50% sur le 2e stage, 60% sur le 3e stage et les suivants.&nbsp;<br /><br /><strong>Tarifs pr&eacute;f&eacute;rentiels pour les &eacute;tudiants</strong><br /><br />Les &eacute;tudiants b&eacute;n&eacute;ficient de 50% de r&eacute;duction sur tous les stages.</p>\r\n<p></p>\r\n<blockquote>\r\n<p><strong><a href=\"https://www.ircam.fr/transmission/formations-professionnelles\">Les formations :&nbsp;</a></strong><img src=\"/media/uploads/Event/formation_programme_22_23.jpg\" alt=\"\" /></p>\r\n</blockquote>\r\n<p><img src=\"/media/uploads/Event/programme_formation_22_23.png\" alt=\"\" width=\"715\" height=\"877\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p><a 
href=\"https://www.ircam.fr/transmission/formations-professionnelles\"><strong>PLUS D'INFORMATIONS PRATIQUES</strong></a></p>\r\n<p><a href=\"https://www.ircam.fr/transmission/formations-professionnelles\"><strong></strong></a></p>\r\n<p></p>\r\n<h4 style=\"text-align: center;\"><a href=\"https://www.ircam.fr/innovations/abonnements-du-forum/ \">S'abonner</a></h4>\r\n<h6></h6>\r\n<p style=\"text-align: center;\"><img src=\"/media/uploads/images/Articles/capture_d&rsquo;écran_2020-07-30_à_11.51.05.png\" alt=\"\" width=\"815\" height=\"377\" /></p>\r\n<h6>&nbsp;</h6>\r\n<h3 style=\"text-align: center;\">Sur quelles technologies me&nbsp;former ?</h3>\r\n<h6 style=\"text-align: center;\"></h6>\r\n<p style=\"text-align: center;\">Vous travaillez sur les<span>&nbsp;</span><strong>interactions en temps r&eacute;el</strong>, m&ecirc;lant environnement sonore, multim&eacute;dia, r&eacute;alit&eacute; virtuelle, etc. ?&nbsp;</p>\r\n<p style=\"text-align: center;\"></p>\r\n<p><img src=\"/media/uploads/Softwares/Max8/14400-thumbnail.jpg\" alt=\"\" width=\"360\" height=\"189\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h4 style=\"text-align: center;\">Max, Max for Live</h4>\r\n<p style=\"text-align: center;\"><em></em></p>\r\n<p style=\"text-align: center;\">Certification<span>&nbsp;</span><a href=\"https://www.ircam.fr/agenda/max-initiation-session-1-certification-max-niveau-1-2022/detail\">Max niveau 1 (session 1)</a>&nbsp;,&nbsp;<a href=\"https://www.ircam.fr/agenda/max-initiation-session-2-certification-max-niveau-1-1/detail\">Max niveau 1 (session 2)</a>,&nbsp;<a href=\"https://www.ircam.fr/agenda/max-perfectionnement-certification-max-niveau-2-2023/detail\">Max Perfectionnement (niveau 2)<span>&nbsp;</span></a>et<span>&nbsp;</span><a href=\"https://www.ircam.fr/agenda/max-initiation-anglais-2022/detail\">Max Initiation en anglais</a></p>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-align: center;\"><a 
href=\"https://forum.ircam.fr/projects/detail/max-8/\">La technologie</a>&nbsp;&nbsp; &nbsp;<span>&nbsp;</span><a href=\"https://www.ircam.fr/transmission/formations-professionnelles/max-max4live\">Le programme des formations</a><span>&nbsp;</span>&nbsp;<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/max-world/\">Max World</a></p>\r\n<p></p>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td>\r\n<p></p>\r\n</td>\r\n</tr>\r\n</tbody>\r\n</table>\r\n<p style=\"text-align: center;\"><br />Vous travaillez sur la<span>&nbsp;</span><strong>composition assist&eacute;e par ordinateur</strong><span>&nbsp;</span>?&nbsp;</p>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-align: center;\"><strong></strong></p>\r\n<p style=\"text-align: center;\"><strong><img src=\"/media/uploads/Softwares/OpenMusic/om2-880x500.png\" alt=\"\" width=\"375\" height=\"213\" /></strong></p>\r\n<p style=\"text-align: center;\"><strong></strong></p>\r\n<h4 style=\"text-align: center;\"><strong>OpenMusic</strong></h4>\r\n<p style=\"text-align: center;\"><a href=\"https://forum.ircam.fr/projects/detail/openmusic/\">La technologie</a><span>&nbsp;</span>&nbsp; &nbsp;<span>&nbsp;</span><a href=\"https://www.ircam.fr/transmission/formations-professionnelles/openmusic\">Le programme des formations</a><span>&nbsp;</span>&nbsp; &nbsp;&nbsp;<span>&nbsp;</span><a href=\"https://forum.ircam.fr/collections/detail/openmusic-world/\">OpenMusic World</a></p>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td>\r\n<p></p>\r\n</td>\r\n</tr>\r\n</tbody>\r\n</table>\r\n<p></p>\r\n<p style=\"text-align: center;\">Vous souhaitez cr&eacute;er des instruments virtuels et travailler sur le<span>&nbsp;</span><strong>traitement du son</strong><span>&nbsp;</span>?</p>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-align: center;\"><img src=\"/media/uploads/Softwares/Modalys/modalys.jpg\" alt=\"\" width=\"348\" height=\"262\" /></p>\r\n<p style=\"text-align: 
center;\"></p>\r\n<h4 style=\"text-align: center;\">Modalys</h4>\r\n<p style=\"text-align: center;\"><a href=\"https://forum.ircam.fr/projects/detail/modalys/\">La technologie</a>&nbsp; &nbsp;<span>&nbsp;</span><a href=\"https://www.ircam.fr/transmission/formations-professionnelles/modalys\">Le programme des formations</a></p>\r\n<table>\r\n<tbody>\r\n<tr style=\"text-align: center;\">\r\n<td>\r\n<p></p>\r\n</td>\r\n</tr>\r\n<tr>\r\n<td>\r\n<p style=\"text-align: center;\">Vous souhaitez apprendre &agrave;<span>&nbsp;</span><strong>&eacute;tirer et transposer du son</strong><span>&nbsp;</span>sur une interface graphique et r&eacute;active ?</p>\r\n<p></p>\r\n<p></p>\r\n<p><img src=\"/media/uploads/Softwares/TS2/ts2.jpg\" alt=\"\" width=\"363\" height=\"204\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h4 style=\"text-align: center;\">TS2 et Partiels</h4>\r\n<p style=\"text-align: center;\"><a href=\"https://forum.ircam.fr/projects/detail/ts2/\">TS2&nbsp;</a>&nbsp; &nbsp; &nbsp;<a href=\"https://forum.ircam.fr/projects/detail/partiels/\">Partiels</a>&nbsp; &nbsp; &nbsp;&nbsp;<a href=\"https://www.ircam.fr/transmission/formations-professionnelles/transposition-et-stretching\">Le programme des formations</a></p>\r\n</td>\r\n</tr>\r\n<tr>\r\n<td>\r\n<h4 style=\"text-align: center;\"></h4>\r\n<p style=\"text-align: center;\"><img src=\"/media/uploads/formations 22/spacialisation_sonore.png\" alt=\"\" width=\"299\" height=\"156\" /></p>\r\n<h4 style=\"text-align: center;\">Spatialisation sonore&nbsp;</h4>\r\n<p style=\"text-align: center;\"><a href=\"https://forum.ircam.fr/collections/detail/Spatialisation/\">La technologie</a>&nbsp; &nbsp; &nbsp;<a href=\"https://www.ircam.fr/transmission/formations-professionnelles/spatialisation-sonore\">Le programme des formations&nbsp;</a></p>\r\n</td>\r\n</tr>\r\n<tr>\r\n<td>\r\n<p></p>\r\n<p><img src=\"/media/uploads/formations 22/depuredata.png\" alt=\"\" width=\"297\" height=\"152\" 
style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<h5 style=\"text-align: center;\"><span>De PureData aux plugins audio</span></h5>\r\n<p style=\"text-align: center;\"><span><span>&nbsp; &nbsp;</span><span><span>&nbsp;</span></span><a href=\"https://www.ircam.fr/transmission/formations-professionnelles/du-pure-data-aux-plugins-audio\">Le programme des formations&nbsp;</a></span></p>\r\n</td>\r\n</tr>\r\n</tbody>\r\n</table>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-align: center;\"><span><a href=\"/media/uploads/formations 22/capteurs.png\"></a></span></p>\r\n<p style=\"text-align: center;\"><span><img src=\"/media/uploads/formations 22/capteurs.png\" alt=\"\" width=\"284\" height=\"154\" /></span></p>\r\n<h5 style=\"text-align: center;\"><span><span>Capteurs, interfaces et machine learning interactif</span></span></h5>\r\n<p style=\"text-align: center;\"><span>&nbsp;</span><span><span>&nbsp;</span></span><a href=\"https://www.ircam.fr/transmission/formations-professionnelles/capteurs-de-mouvement\">Le programme des formations&nbsp;</a></p>\r\n<p style=\"text-align: center;\"></p>\r\n<h4 style=\"text-align: left;\"><a href=\"https://www.ircam.fr/transmission/formations-professionnelles/autres-formations\"><span>Autres formations non programm&eacute;es</span></a></h4>\r\n<p></p>",
        "topics": [
            {
                "id": 254,
                "name": "Certification max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 253,
                "name": "Composition Assistée par Ordinateur",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 249,
                "name": "Formations professionnelles",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 251,
                "name": "Interaction temps réel",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 250,
                "name": "Logiciels",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 252,
                "name": "Traitement du son",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 248,
                "name": "Transmettre",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17579,
            "forum_user": {
                "id": 17576,
                "user": 17579,
                "first_name": "Stephanie",
                "last_name": "Leroy",
                "avatar": "https://forum.ircam.fr/media/avatars/avatar.512.png",
                "avatar_url": "/media/cache/ef/2a/ef2abec1d6fe7fca40b50f8b6a2a4b1e.jpg",
                "biography": "",
                "date_modified": "2025-10-31T10:58:26.082584+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 440,
                        "forum_user": 17576,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 926,
                                "membership": 440
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "sleroy",
            "first_name": "Stephanie",
            "last_name": "Leroy",
            "bookmarks": []
        },
        "slug": "les-formations-professionnelles-a-lircam-2021-2022",
        "pk": 274,
        "published": true,
        "publish_date": "2021-09-13T10:43:48+02:00"
    },
    {
        "title": "CAC sketchbook: Linear A",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>I would like to lead a small presentation on the tools I created to compose my work Linear A for Bohlen Pierce clarinet and electronics. The piece was composed entirely using Computer-Assisted Composition tools created with the bach library. With these patches, I could compose lines for the BP clarinet in bach.rolls, using a good microtonal approximation and with meter-free notation that would later be quantized. Using bach markers to denote canonic replies to these lines (at varying transpositions and speeds), I was able to visualize and hear the entire canonic texture.</p>\r\n<p>A further patch allowed the automatic export of the bach score to an Antescofo~ file, which would convert the notated polyphonic texture to instructions for performance using buffers piloted by SuperVP. All in all, I was able to create an integrated environment for working with canon, microtones, and electronics, unhindered by the usual constraints of Finale/Sibelius/Dorico, and pushed it to create a complete piece and an electronic score. I would be thrilled to share this research.</p>",
        "topics": [],
        "user": {
            "pk": 1536,
            "forum_user": {
                "id": 1535,
                "user": 1536,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Trapani-headshot-dolapdere.jpeg",
                "avatar_url": "/media/cache/f1/05/f1057758822d06d5f033ee7d0726d098.jpg",
                "biography": "The American/Italian composer Christopher Trapani was born in New Orleans, Louisiana. He earned a Bachelor’s degree from Harvard, a Master’s degree at the Royal College of Music, and a doctorate from Columbia University. He spent a year in Istanbul on a Fulbright grant, studying microtonality in Ottoman music, and nearly seven years in Paris, including several working at IRCAM. He now lives in Palermo and Los Angeles.\r\n\r\nChristopher’s honors include the 2016-17 Rome Prize, a 2019 Guggenheim Fellowship, and the 2007 Gaudeamus Prize. He has received commissions from the Fromm Foundation (2019), the Koussevitzky Foundation (2018), and Chamber Music America (2015). His debut CD, Waterlines, was released on New Focus Recordings in 2018, and the follow-up Horizontal Drift appeared in 2022.",
                "date_modified": "2026-02-13T21:04:12.014147+01:00",
                "is_premium": true,
                "is_internal_user": false,
                "vip": true,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 512,
                        "forum_user": 1535,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-22",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "ctrapani",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "cac-sketchbook-linear-a",
        "pk": 1344,
        "published": true,
        "publish_date": "2022-09-13T17:14:19+02:00"
    },
    {
        "title": "Le ressort non-linéaire",
        "description": "Résidence en recherche artistique 2018.19.\r\nHans Peter Stubbe Teglbjærg.\r\nEn collaboration avec l'équipe Systèmes et Signaux Sonores : Audio/Acoustique, instruMents de l'Ircam et du Zentrum für Kunst und Medien (ZKM).",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\"></h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>R&eacute;sidence en recherche artistique 2018.19</h3>\r\n<p><strong>&laquo; Le ressort non-lin&eacute;aire &raquo;</strong><br />En collaboration avec l'&eacute;quipe<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/systemes-et-signaux-sonores-audioacoustique-instruments-s3am/\">Syst&egrave;mes et Signaux Sonores : Audio/Acoustique, instruMents</a><span>&nbsp;</span>de l'Ircam et du<span>&nbsp;</span><a href=\"http://zkm.de/\" target=\"_blank\">Zentrum f&uuml;r Kunst und Medien</a><span>&nbsp;</span>(ZKM).</p>\r\n<p>Le projet &laquo; ressort non lin&eacute;aire &raquo; s'int&eacute;resse au &laquo; couplage non lin&eacute;aire &raquo; et &laquo; couplage progressif &raquo;. Inspiration de la mod&eacute;lisation physique, la synth&egrave;se est &laquo; transpos&eacute;e &raquo; sur la conception d'une configuration musicale, dont la complexit&eacute; n&eacute;cessite une exp&eacute;rimentation th&eacute;orique et pratique dans le domaine de la mod&eacute;lisation physique pour la ma&icirc;triser.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">Hans Peter Stubbe Teglbj&aelig;rg</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202019/.thumbnails/hans_peter_stubbe_teglbjaerg.jpg/hans_peter_stubbe_teglbjaerg-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographie</h3>\r\n<p>Hans Peter 
Stubbe Teglbj&aelig;rg &eacute;tudie la composition instrumentale et &eacute;lectronique aupr&egrave;s d'Ib N&oslash;rholm et d&rsquo;Ivar Frounberg au Conservatoire royal de musique du Danemark (1986-1991) et aupr&egrave;s de Jan W. Morthenson &agrave; Stockholm (Su&egrave;de). Il suit des &eacute;tudes de composition par ordinateur &agrave; l&rsquo;Institut de sonologie du Conservatoire royal de La Haye (Pays bas) de 1991 &agrave; 1993 ainsi qu&rsquo;avec Tristan Murail et Brian Ferneyhough dans le cadre du Cursus de composition et d&rsquo;informatique musicale de l&rsquo;Ircam o&ugrave; il a &eacute;galement &eacute;t&eacute; compositeur en recherche et enseignant.</p>\r\n<p>Hans Peter Stubbe s'int&eacute;resse particuli&egrave;rement au caract&egrave;re physique/acoustique des instruments et &agrave; la ph&eacute;nom&eacute;nologie des sons naturels. Il acquiert des connaissances approfondies dans les domaines de la composition assist&eacute;e par ordinateur, de la synth&egrave;se sonore et de la spatialisation. Il s&rsquo;implique &eacute;galement dans l&rsquo;interpr&eacute;tation, l&rsquo;interaction et la diffusion de la musique &eacute;lectronique et aime &agrave; collaborer avec d'autres formes d'art. Il compose des &oelig;uvres vocales, instrumentales, pour instruments et &eacute;lectronique, pour bande, pour la sc&egrave;ne, pour des installations audiovisuelles et des vid&eacute;os d'art. Sa musique est donn&eacute;e principalement en Europe et est enregistr&eacute;e chez DaCapo, Media Artes et Kontrapunkt.</p>\r\n<p>En 1990, Hans Peter Stubbe cofonde l&rsquo;ATHELAS Sinfonietta de Copenhague. En 1996, le Conseil des arts du Danemark lui d&eacute;cerne une bourse de 3 ans.&nbsp; Il enseigne la composition &eacute;lectroacoustique au Conservatoire royal de Copenhague depuis 2001 et donne r&eacute;guli&egrave;rement des cours d'informatique musicale. 
Il participe &agrave; plusieurs projets internationaux consacr&eacute;s au d&eacute;veloppement d'outils pour le contr&ocirc;le de la synth&egrave;se sonore. En 2008-2009, il est compositeur en recherche &agrave; l&rsquo;Ircam et, entre 2009 et 2011, il est en r&eacute;sidence &agrave; l'Orchestre symphonique d'&Aring;rhus (Danemark) pour lequel il compose deux &oelig;uvres pour orchestre.</p>\r\n<p></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 32,
                "name": "Recherche Artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 30,
                "name": "Recherche Musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "le-ressort-non-lineaire",
        "pk": 27,
        "published": true,
        "publish_date": "2019-03-21T16:48:32+01:00"
    },
    {
        "title": "OpenMusic 7.6 News : latest features",
        "description": "Karim Haddad and Steven Socha present OpenMusic 7.6's latest features.",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div class=\"c-content__button\">\r\n<div>\r\n<div>\r\n<div>\r\n<div>&nbsp;</div>\r\n<div><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/ban_openmusic-384x157.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></div>\r\n</div>\r\n<br /><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"></a>\r\n<div><a href=\"https://forum.ircam.fr/profile/haddad/\"><strong>Karim Haddad</strong></a>&nbsp;and&nbsp;<a href=\"https://forum.ircam.fr/profile/socha/?view=profile\"><strong>Steven Socha</strong>&nbsp;</a>present OpenMusic 7.6's latest features, improvements, and bug fixes, with a focus on the new Equal Divisions of the Octave (EDO) tunings, their notation systems, and their application in OpenMusic and FluidSynth</div>\r\n<div></div>\r\n<div><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/df79bd17881d36ec6f0fbd9b98783a76.png\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 175,
                "name": "CAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 311,
                "name": "Om",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1265,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 14,
            "forum_user": {
                "id": 14,
                "user": 14,
                "first_name": "Karim",
                "last_name": "Haddad",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1f556229c0742ef0586dd43d312f81a4?s=120&d=retro",
                "biography": "Karim Haddad was born in 1962 in Beirut Lebanon. He studied at the national conservatory there until it closed its doors in 1975 due to the civil war. He then went on to study philosophy and literature. Haddad received six awards from the CNSMD de Paris in addition to the Diplôme Supérieur de Composition with honors. He has worked with composers such as A.Bancquart, P. Mefano, K. Huber, and Emmanuel Nunes. This learning period is marked by his keen interest for non-tempered spaces and their strong relationship with temporal poetry. In 1992 and 1994 he took part in Ferienkursen für Musik in Darmstadt where he received a scholarship. In 1995, he took a class in computer music at IRCAM, and from that point on, the computer became the only tool he used for the elaboration of his works.\r\n\r\nAs a computer music expert, and more particularly as an expert in computer-assisted composition, in 2000 he is given the responsibility of technical support for the IRCAM Forum. He has developed several tools for the OpenMusic environment (synthesis control via Csound), as well as interfaces between this environment and score editors such as Finale and Lilypond.",
                "date_modified": "2026-02-18T11:08:17.096351+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 3,
                        "forum_user": 14,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 0,
                        "keys": [
                            {
                                "id": 544,
                                "membership": 3
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "haddad",
            "first_name": "Karim",
            "last_name": "Haddad",
            "bookmarks": []
        },
        "slug": "test-article-karim-haddad",
        "pk": 3251,
        "published": true,
        "publish_date": "2025-02-21T15:18:22+01:00"
    },
    {
        "title": "Logelloop 6 - Présentation des outils de spatialisation et de synthèse granulaire - Philippe Ollivier",
        "description": "Logelloop 6, sorti en janvier 2024, doté d'outils de transformation, de création de son, de spatialisation, d'enregistrement multicanal, est l’outil idéal pour composer ou improviser une musique électroacoustique en temps réel.",
        "content": "<p><img src=\"https://forum.ircam.fr/media/uploads/bandeaux_articles.png\" alt=\"\" width=\"990\" height=\"300\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br />Pr&eacute;sent&eacute; par: Philippe Ollivier<br /><a href=\"https://forum.ircam.fr/profile/Livfall/\">Biographie</a></p>\r\n<p></p>\r\n<p><br />Cette nouvelle version apporte l&rsquo;Acousmonium, un syst&egrave;me qui permet de spatialiser le son sur jusqu&rsquo;&agrave; 48 haut-parleurs et que vous disposez &agrave; votre guise dans un espace dont vous fixez les dimensions pour produire une spatialisation en 3D. Il est possible de d&eacute;placer les sources manuellement ou en utilisant des scripts ou automations de trajectoires.</p>\r\n<p><img alt=\"Logelloop 6 - Acousmonium\" src=\"https://forum.ircam.fr/media/uploads/user/bc513541acd98bafe27dc896110b8ba6.png\" /></p>\r\n<p>Logelloop 6 est &eacute;galement dot&eacute; d&rsquo;outils de synth&egrave;se granulaire multicanaux nativement pens&eacute;s pour une diffusion sonore sur un grand ensemble de haut-parleurs.</p>\r\n<p><img alt=\"Logelloop - Granular\" src=\"https://forum.ircam.fr/media/uploads/user/fe97ad2a5f81debd1ac89956f7a16a30.png\" /></p>\r\n<p>Il est possible de cr&eacute;er ses propres scripts pour configurer des interfaces graphiques permettant d&rsquo;associer des fonctions complexes &agrave; une seule action de l&rsquo;utilisateur.</p>\r\n<p>Logelloop est le compagnon id&eacute;al des musiciens int&eacute;ress&eacute;s par la spatialisation du son, des musiciens, des cr&eacute;ateurs sonores, des compositeurs &eacute;lectroacoustiques ou des r&eacute;gisseurs int&eacute;ress&eacute;s par un outil de cr&eacute;ation sonore et de spatialisation int&eacute;gr&eacute;e.</p>\r\n<p></p>\r\n<p><strong><a href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-de-lircam-2024-edition-speciale-les-30-ans/\">Retour &agrave; l'&eacute;v&eacute;nement&nbsp;</a></strong></p>",
        "topics": [
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1805,
                "name": "electroacoustic music",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 916,
                "name": "Forum Workshops",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1808,
                "name": "granular synthesis",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1728,
                "name": "IRCAM Forum Workshops 2024",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1800,
                "name": "Logelloop",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1804,
                "name": "loop",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1806,
                "name": "script",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 370,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1803,
                "name": "synthese granulaire ",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 60,
            "forum_user": {
                "id": 60,
                "user": 60,
                "first_name": "Philippe",
                "last_name": "Ollivier",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/69f9abc121aa273303fe779553bd1dc2?s=120&d=retro",
                "biography": "Philippe Ollivier compose avec le bandonéon et le logiciel Logelloop. Le dialogue entre l’instrument acoustique et l’électronique lui permet de se constituer un langage propre, en constante évolution.\n\nPoussé par une curiosité artistique protéiforme, le compositeur s’engage aussi dans une recherche plastique (photographie et vidéo-mapping) qui interroge le rapport du son à l’image et s’enrichit de collaborations avec le cirque, le théâtre et la danse contemporaine.\n\nIl aime investir des espaces naturels ou des lieux inattendus et répondre aux impulsions de leur musique propre. Ses créations interrogent notre rapport au temps et cherchent à susciter les décloisonnements tant sociaux qu’esthétiques.\nPhilippe Ollivier est aussi directeur artistique du Logelloù, lieu de création musicale en Côtes d’Armor et, avec Christophe Baratay, le concepteur de Logelloop.\n\nIntervenant professionnel pour le Master 2 « Image et Son » à l'Université de Bretagne Occidentale de Brest, il y enseigne la programmation Max.",
                "date_modified": "2025-10-27T18:43:17.121933+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Livfall",
            "first_name": "Philippe",
            "last_name": "Ollivier",
            "bookmarks": []
        },
        "slug": "logelloop-6-presentation-des-outils-de-spatialisation-et-de-synthese-granulaire-2",
        "pk": 2745,
        "published": true,
        "publish_date": "2024-02-16T08:57:09+01:00"
    },
    {
        "title": "Noire - expérience immersive - Novaya",
        "description": "Noire est une expérience immersive adaptée d'un essai biographique écrit par Tania de Montaigne, dont la première aura lieu au Centre Pompidou le 20 avril 2023.",
        "content": "<div class=\"page\" title=\"Page 2\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong>NOIRE</strong></p>\r\n<p><strong></strong></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>A Montgomery, Alabama, dans le bus de 14h30 le 2 mars 1955, Claudette Colvin, 15 ans, refuse de céder sa place à un passager blanc. Malgré les menaces, elle reste assise. Après avoir été jetée en prison, elle décide d'attaquer la ville et de plaider non coupable. Personne n'avait jamais osé faire ça. Et pourtant, personne ne se souviendra de son nom.</span></p>\r\n<p><span>Noire est une expérience immersive adaptée d'un essai biographique écrit par Tania de Montaigne, dont la première aura lieu au Centre Pompidou le 20 avril 2023.</span></p>\r\n<p><span></span></p>\r\n<p><span>L&rsquo;expérience se fait par groupes de dix personnes. Les visiteurs se préparent avec un équipement spécifique : un casque Hololens 2, un casque audio à conduction osseuse et un petit sac à dos. Ils pénètrent dans un décor que viendront bientôt hanter les fantômes de l'alabama des années 50...</span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>Prenez une profonde inspiration, soufflez, vous êtes désormais à Montgomery dans l&rsquo;Alabama des années cinquante. Regardez-vous, votre corps change, vous êtes dans la peau et l&rsquo;âme de Claudette Colvin, jeune fille noire de 15 ans à la vie sans histoire.</span></p>\r\n<p><span>Vous sortez des cours, vous attendez le bus, vous prenez votre ticket. Depuis toujours, vous savez qu&rsquo;être noir ne donne aucun droit, mais beaucoup de devoirs. Vous savez qu&rsquo;il y a les blancs d&rsquo;un côté et vous de l&rsquo;autre. Une fois que vous aurez pris votre ticket, vous ressortirez et monterez par la porte du fond. 
Une fois installée, vous savez aussi que, si un blanc n&rsquo;a pas de place assise, vous devrez lui céder la vôtre. Il en a toujours été ainsi à Montgomery.</span></p>\r\n<p><span>Seulement, </span><span>le 2 mars 1955, Claudette Colvin refuse de se lever. Malgré les menaces du chauffeur, qui est armé, malgré celles des autres passagers blancs et de certains passagers noirs</span><span>, elle reste assise. Mieux, après avoir été arrêtée et jetée en prison, elle décide d&rsquo;attaquer la ville et de plaider non coupable, c&rsquo;est une première. Et pourtant, personne ne retiendra son nom.</span></p>\r\n<p><span>C'est le début d&rsquo;un itinéraire qui mènera Claudette Colvin de la lutte à l&rsquo;abandon.</span></p>\r\n<p><span>Quand, 9 mois plus tard, Rosa Parks, couturière à la peau plus claire, fait le même geste que Claudette, tout change. Bientôt soutenue par un jeune pasteur récemment arrivé à Montgomery, Martin Luther King, Rosa Parks devient une héroïne, l&rsquo;étincelle qui lance le mouvement des droits civiques. L&rsquo;Histoire est en marche.</span></p>\r\n<p><span>Claudette Colvin a tout permis, mais elle est celle qu&rsquo;on a oubliée. Elle vit encore aujourd&rsquo;hui aux États-Unis. Elle a 82 ans.</span></p>\r\n<p><span>Publiée soixante ans après les faits, la biographie écrite par Tania de Montaigne, lauréate du prix Simone Veil 2015, nous plonge dans un moment de l&rsquo;histoire américaine des droits civiques qui ne cesse de resurgir dans notre actualité. 
Le sentiment qu&rsquo;être noir, c&rsquo;est être une race inférieure, a fortiori si on est une femme.</span></p>\r\n<p><span>Suite à l&rsquo;adaptation théâtrale de Stéphane Foenkinos pour laquelle Pierre-Alain Giraud a réalisé les films projetés sur scène, l&rsquo;écriture d&rsquo;une version immersive en réalité augmentée s&rsquo;est imposée comme un prolongement nécessaire et complémentaire afin de transmettre l&rsquo;histoire de Claudette Colvin et poursuivre l&rsquo;&oelig;uvre de réhabilitation initiée par Tania de Montaigne.</span></p>\r\n</div>\r\n</div>\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p></p>\r\n<div class=\"page\" title=\"Page 3\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><span>L'installation immersive Noire est présentée au Centre Pompidou en avril/mai 2023. Elle est produite par Novaya et le Centre Pompidou, coproduite par Flash Forward Entertainment (Taïwan) et avec le soutien du CNC, de la Région Rhône-Alpes Auvergne, de l'Institut français, ainsi que de la Taiwan Creative Content Agency.</span></p>\r\n<p><span>Dans un décor spécialement conçu pour l'expérience, se rejouent devant vous les scènes emblématiques de la vie de Claudette Colvin lors de la lutte pour les droits civils.</span></p>\r\n<p><span></span></p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [],
        "user": {
            "pk": 27625,
            "forum_user": {
                "id": 27597,
                "user": 27625,
                "first_name": "Nicolas",
                "last_name": "Aleksandrov",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/81fefeb138ceb9637e12a43a1640a81d?s=120&d=retro",
                "biography": null,
                "date_modified": "2023-02-24T15:31:20+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "nite",
            "first_name": "Nicolas",
            "last_name": "Aleksandrov",
            "bookmarks": []
        },
        "slug": "noire",
        "pk": 2081,
        "published": true,
        "publish_date": "2023-02-24T17:10:43+01:00"
    },
    {
        "title": "\"to the dead poplar\" by Sam Erpelding (Luxembourg)",
        "description": "This audiovisual art installation investigates the relation between acoustic biological and human activity in protected ecosystems using high-resolution multi-channel sound recording and playback technics. The visualization of ecoacoustic indices in the form of modulated spectrograms provides crucial information about the ecological condition of landscapes.",
        "content": "<p></p>\r\n<p>This audiovisual art installation explores the relationship between acoustic biological and human activity in the Donau-Auen National Park (AT).&nbsp;<br />High-resolution multi-channel sound recordings and their playback make it possible to present the acoustic characteristics of wild meadows, floodplain forests, and riparian landscapes throughout the seasons. This allows the complex sounds of birds, insects, amphibians, and mammals, including &nbsp;sounds from trees, underground and aquatic organisms, to be represented.&nbsp;<br />In four chapters, the complexity of these natural sounds is artistically summarized in relation to the acoustic impacts of humans. Each chapter covers one season and leads the listener on a sound journey through the landscapes of the floodplains.&nbsp;<br />The spectrograms are modulated by video recordings of the corresponding habitats and by ecoacoustic indices. The higher the biological activity and the lower the human presence, the more colorful they appear.</p>\r\n<p>This ambisonics soundscape compositions can be played back on any multi-channel sound system or via binaural stereo over headphones. The videos can be played back with multiple projectors adapted to the venue, or with HD TV screens.&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p><img alt=\"8-Channel Sound-Installation at Ars Electronica Festival 2024\" src=\"https://forum.ircam.fr/media/uploads/user/6cb064d61ea8d4738e85b599586370f1.jpg\" /></p>\r\n<p><img alt=\"16-Channel Sound Installation at Casino Display Luxembourg\" src=\"https://forum.ircam.fr/media/uploads/user/7655487ca73559e5a6b0c95782995eda.jpeg\" /></p>\r\n<p><img alt=\"4-Channel Sound-Installation at Nationpark Austria Visitor Center 2024\" src=\"https://forum.ircam.fr/media/uploads/user/dc9596078638a2559ce9c3650e44a57f.jpeg\" /></p>",
        "topics": [
            {
                "id": 623,
                "name": "Ambisonics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3583,
                "name": "Ecoacoustics",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 3584,
                "name": "IRCAM Forum Taipei",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 75,
                "name": "Jitter",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 24849,
            "forum_user": {
                "id": 24822,
                "user": 24849,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f7188201b09fab26364c12eac04e89a0?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-04T15:55:29.777769+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "samdankwart",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "to-the-dead-poplar-by-sam-erpelding-luxembourg",
        "pk": 3891,
        "published": true,
        "publish_date": "2025-10-27T10:32:23+01:00"
    },
    {
        "title": "R-IoT v3 : Commercial Release & Availability - Emmanuel FLETY, Prototypes & Engineering Team (PIP) / Marc SIRGUY (EOWAVE)",
        "description": "Presentation of the currently developed version (3) of the R-IoT wireless IMU sensor, designed for live performance, research and digital lutherie",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"><img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /><span>&nbsp;</span><label class=\"c-content__button-link-label\">Ircam Forum Workshops</label></a></div>\r\n<div class=\"c-content__button\"></div>\r\n<p>Presented by: Emmanuel FLETY, Prototypes &amp; Engineering Team (PIP) &amp; Marc SIGUY (EoWave)</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/flety/\">Biography</a></p>\r\n<p>&nbsp;</p>\r\n<p>The R-IoT wireless sensor platform is a small electronic board embedding 3D, 9-axis motion sensors, a baro-altimeter and a wireless microcontroller designed to make Open Sound Control based gestural sensing systems for research, motion analysis and performance arts controlling interactive contents and live electronics. We present the finalized version of the board and its distribution by the french company<span>&nbsp;</span><a href=\"https://www.eowave.com/\">EOWAVE</a>.</p>\r\n<p>We detail the firmware coding progress of the R-IoT inertial sensors system that can be usd to capture musical gesture applied to live performance and new instrument making. A particular focus will be shed on how to modify the firwmare for custom external sensors or needs as well a MIDI BLE applications.</p>\r\n<p style=\"text-align: justify;\"><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/e94682d837fb664d6af317b777f1dafe.jpg\" /><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/c522df7e76d33cea112ecbaad6e98128.jpg\" /></p>",
        "topics": [
            {
                "id": 2698,
                "name": "Gestural sensing",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2703,
                "name": "IRCAM Forum Workshops 2025",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 244,
                "name": "Open sound control",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 950,
                "name": "OSC ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 100,
                "name": "Sensor",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1913,
                "name": "WIFI",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 2699,
                "name": "Wireless IMU",
                "status": 1,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 9326,
            "forum_user": {
                "id": 9323,
                "user": 9326,
                "first_name": "Emmanuel",
                "last_name": "Flety",
                "avatar": "https://forum.ircam.fr/media/avatars/Flety_head3-removebg-preview1.png",
                "avatar_url": "/media/cache/61/45/614512f523bf49d2cd4c77f66e864c03.jpg",
                "biography": "Emmanuel FLETY is an electronics engineer at IRCAM and is in charge of the PIP Engineering and Prototype Team.  \nA specialist in embedded electronics, he has developed over the past twenty years expertise in digitization and acquisition \ninterfaces for miniaturized wireless sensors with low latency. \nThese are critical tools in the fields of motion capture and recognition, as well as in the creation of new gestural interfaces \nfor music and digital lutherie.  \n\nIn 2005, alongside his work at the Institute, he founded his own company, Plecter Labs, where he explores possible connections \nbetween the design of microcontroller boards and replicas of cinema props, thereby investigating tangible relationships between movement, \nsound, and light. A maker at heart, he enjoys exploring the poetic expression offered by unique interactive objects through a hands-on, \nartisanal approach.",
                "date_modified": "2025-02-27T11:24:46.657150+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 153,
                        "forum_user": 9323,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "flety",
            "first_name": "Emmanuel",
            "last_name": "Flety",
            "bookmarks": []
        },
        "slug": "r-iot-v3-commercial-release-availability-emmanuel-flety-prototypes-engineering-team-pip-marc-sirguy-eowave",
        "pk": 3312,
        "published": true,
        "publish_date": "2025-02-27T11:50:46+01:00"
    },
    {
        "title": "PanoLive Workshops - Jérôme LESUEUR",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>I would like to introduce you to my Max for Live device: <a href=\"https://forum.ircam.fr/projects/detail/panolive/\">[PanoLive]</a></p>\r\n<p>This is an integration of Panoramix in Ableton Live. Panoramix is developed by the Spat team and in particular Thibaut Carpentier.... Panoramix won the CNRS Crystal Medal. It is a 3D audio system that I was able to integrate into Live. The automation allows, via a VST plugin: OSCar, to send the coordinates of a sound source attached to a track in Live (via OSC). You can therefore use automations as trajectory managers in a 3D volume.</p>\r\n<p>PanoLiveControl is a client device that allows for automatic trajectory generation.</p>\r\n<p>I recently added binaural monitoring via a gyro sensor, which allows the panning headset to be rotated, which allows VR. It can handle up to 32 stereo tracks and 64 speakers (or even better with WFS).</p>\r\n<p>For the technologies, panoramix is a set of objects coming from Spat, the device being frozen it embeds the last version of the objects, so it doesn't need third party resources. OSC is used to manage the trajectories of the OSCar and the head tracker. The interface is quite intuitive and allows access to the panoramix console, OSC settings, viewer and EQ-Dyn independently. Some of the settings and defaults are from experimentation and feedback I've had.</p>\r\n<p>PanoLiveControl generates automatic trajectories with a two-axis description system and mathematical curve presets.</p>\r\n<p>The head tracker part consists of a conversion/driving application, the dialogue is done by OSC so that the tracking track can be slaved to the head movement. Two sensors are available on the market.</p>\r\n<p>So there is a dynamic use with the trajectories. And a static use where the sources are placed in space and the movement is that of the sensor. 
The monitoring is in binaural which allows to hear the effects of the trajectories.</p>\r\n<p>That's why the workshop takes place in studio 1 for the dynamic part. For the static aspect, I will share a headset with the sensor and a headphones, so you can test your own projects made with PanoLive.</p>\r\n<p>The management of the Panoramix session parameters file is saved in PanoLive so that it does not have to be loaded for each Live session.</p>\r\n<p>PanoLive is distributed with an example of a live set to explain the specific routing of live tracks.</p>\r\n<p>J&eacute;r&ocirc;me Lesueur</p>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: 204px; top: 35px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>",
        "topics": [
            {
                "id": 182,
                "name": "Audio 3D",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 551,
                "name": "Binaural",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 81,
                "name": "Panoramix",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 45,
                "name": "Spat5",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 1108,
                "name": "VR",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 24,
            "forum_user": {
                "id": 24,
                "user": 24,
                "first_name": "Jerome",
                "last_name": "Lesueur",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e9ddb73385f7551a5fbe34e729a02d3c?s=120&d=retro",
                "biography": "My name is Jérôme Lesueur, I am a composer, sound engineer, computer music designer, artist producer, conductor, and bass player\n\nI started music at 6 years old thanks to an elementary school that taught music through choir and piano. I always had a piano at home, and my mother taught me the basics\n\nI started playing bass guitar at the age of 12 and started playing in studios from the age of 17. This gave me the desire to learn More and I followed a Musical Bachelor at Sèvres (near Paris) with a high-level entrance exam. During these years, I took private lessons with Jeanne Lachaux in order to deepen what was done in class, and more during 6 years.\n\nAfter my Bachelor at the Lycée de Sèvres, I also took 8 years of private lessons with Gilbert Villedieu in analysis, writing, orchestration, and composition with a specialization in serial and post-serial music.\n\nI did the plenum of professional workshops at Ircam from 2005 to 2009, then a follow-up until 2015… And maybe in a close future… I followed a total of 30 professional workshops.\n\nMy interests are very diverse and I try to find solutions with my patches all distributed via the forum...\n\nI hope you like my patches",
                "date_modified": "2026-02-17T17:14:05.140577+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 7,
                        "forum_user": 24,
                        "date_start": "2023-02-07",
                        "date_end": "2025-02-28",
                        "type": 0,
                        "keys": [
                            {
                                "id": 542,
                                "membership": 7
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": false
                    }
                ]
            },
            "username": "smalllotus",
            "first_name": "Jerome",
            "last_name": "Lesueur",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 305,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 304,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 225,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 226,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 302,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 41,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 212,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 229,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 26,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 53,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 40,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 28,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 30,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 24,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 253,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 59,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 46,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 48,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 50,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 62,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 69,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 13,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 244,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 15,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 34,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 77,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 272,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 334,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 214,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 866,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 591,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 670,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 2029,
                    "user": 24,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 93,
                    "user": 24,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "panolive-workshops",
        "pk": 2029,
        "published": true,
        "publish_date": "2023-01-27T21:37:08+01:00"
    },
    {
        "title": "Spatial engine: A VR Cave audio system using spat~",
        "description": "Spatial audio implementation for Tilburg University's VR Cave. We use spat to control a 42.2 speaker dome with game engine's like Unity3D, and Unreal Engine. ",
        "content": "<p><strong>Studio Onno</strong> - Netherlands.</p>\n<p>ARTICLE IS NOT FINISHED YET, I WILL CONTINUE WRITING SHORTLY!</p>\n<p><strong>Introduction</strong></p>\n<p>TILBURG UNIVERSITY - DAF TECHNLOLOGY LAB - VR CAVE&nbsp;</p>\n<p><a href=\"https://www.tilburguniversity.edu/nl/campus/experiencing-virtual-reality\">Tilburg University</a> Understanding society is a big topic in the philosophy of the campus. To contribute to this goal, they are heavily investing in pioneering Virtual reality. Their latest investment: Two new VR caves! These caves will be mainly used to reproduce real world scenario's and measure the participants responses using bio sensors and tracking. Other usecases include:&nbsp;</p>\n<p>- Simulation for heart surgery</p>\n<p>- Scientific experiments (social, and technological)</p>\n<p><strong>Our role in this project</strong></p>\n<p>Tilburg University came to us for advice on realising spatial audio on a multichannel system. We designed a harware solution. The hardware setup: 42.2 Genelec POE loudspeaker system</p>\n<p><strong>Speaker grid based on the Thompson model.&nbsp;</strong></p>\n<p>The Thompon model is a mathmathical method to describe an equal distribution of point on a sphere.&nbsp; This same technology is also used by ambisonic microphones like the Eigenmike. We firured: Why not create a speaker setup in this configuration to recreate the perfect soundfield? And so we did. All the loudspeakers have a maximun distance of 1.20m. This is within the reach of generating continuous phantom imaging, even if you are standing very close to the screens. 
In other words: You get the most coverage, with the least amount of speakers needed.&nbsp;<img src=\"/media/uploads/user/0b2d80d3991ee178c899a837c50e99a6.png\" alt=\"\" width=\"1440\" height=\"945\" /> is both very immersive, and also allows for audio science to be done.&nbsp;</p>\n<p>&nbsp;</p>\n<p><strong>Spatial Audio system for virtual reality</strong></p>\n<p>The goal of this project was to create a way to use spatial audio within a game engine enviroment. Almost all gaming platforms support surround sound. Yet, this surround sound is limited to predefined formats such as 2.0, 5.1, or 7.1. We found in our research that it is simply not possible to send out an arbitrary number of speaker outputs. In order to achieve our goal, we needed to extend Unity3D's capability with true multichannel support.&nbsp;</p>\n<p>&nbsp;</p>\n<p>In collaboration with Tilburg University, we created a 42.2 Genelec loudspeaker Cave. Loudspeakers are behind 4 8K screens. The goal of this project is to render the audio to match the 3D cave video as much as possible. For this, we created an integrated way to use Ircam spat5 with Unity3D using OSC commands. We designed our patch to be optimal for use within a VR environment. The app needed to&nbsp; be simple to use for non-audio professionals, yet harness the full power of spat~</p>\n<p><strong>Hoa mixer</strong></p>\n<p>&nbsp;</p>\n<p><strong>3D visualiser for Unity3D</strong></p>\n<p><img src=\"/media/uploads/user/1b3db07612c78cfcd6e699e0684a0148.png\" alt=\"\" width=\"1440\" height=\"973\" /></p>\n<p>&nbsp;</p>\n<p><strong>Features</strong><br />- Reverb zones based on player location<br />- Relative listener position<br />- Ambisonics for background ambience<br />- VBAP3D for precise localisation of sources<br />- 3D visualiser that works in total sync with Unity3D (or Unreal Engine)<br />- Head-tracked binaural for VR headsets <br />- Transport system for Unity3D, and the ability to load game scenes. 
<br />- Dynamic adaptation to the number of voices in your game Engine<br />- Also available as a Max for Live device</p>",
        "topics": [],
        "user": {
            "pk": 18283,
            "forum_user": {
                "id": 18276,
                "user": 18283,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/269686df02292d69513cf75245f9a55e?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-10-15T16:03:36.117486+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "marijn",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "spatial-engine-a-vr-cave-audio-system-using-spat",
        "pk": 938,
        "published": false,
        "publish_date": "2021-03-11T11:52:00.771664+01:00"
    },
    {
        "title": "Resounding Bodies -  A multimodal compositional approach by Alberto Gatti",
        "description": "",
        "content": "<div class=\"c-content__button\"><a class=\"c-content__button-link\" href=\"https://forum.ircam.fr/collections/detail/les-ateliers-du-forum-ircam-a-paris-du-26-au-28-mars-2025/\"> <img class=\"c-content__button-link-icon\" src=\"https://forum.ircam.fr/static/icons/arrow-left-white.svg\" alt=\"arrow-left-white\" /> <label class=\"c-content__button-link-label\">Ircam Forum Workshops</label> </a></div>\r\n<div class=\"c-content__button\"></div>\r\n<div id=\"gtx-trans\" style=\"position: absolute; left: -100px; top: -20.0052px;\">\r\n<div class=\"gtx-trans-icon\"></div>\r\n</div>\r\n<p><span><img src=\"/media/uploads/cellist.jpg\" alt=\"\" width=\"954\" height=\"537\" />&nbsp;<img src=\"https://forum.ircam.fr/media/uploads/dscf5139.jpg\" alt=\"\" width=\"662\" height=\"441\" /><span>&nbsp;</span></span></p>\r\n<p><span></span>Presented by Alberto Gatti</p>\r\n<p><a href=\"https://forum.ircam.fr/profile/gatti/\" target=\"_blank\">Biography</a></p>\r\n<p><span>The use of vibrating transducers has seen a variety of applications in recent years, ranging from the electroacoustic to the strictly artistic. In particular, bone conduction of sound has undergone major developments, drawing attention to a new idea of sound perception. The problem behind this practice often stems from the difficulty of organizing the sound con- tent to be broadcast with vibrating transducers, often ill-suited to faithful sound restitu- tion. The aim of this project is to create software for real-time analysis and automatic adaptation of sound content on devices involving one or more vibrating transducers, thus also resolving the management of spatial sound diffusion. To this end, the software also provides a sound flow control system using motion sensors applied to users or potential performers. 
The outcome of the project will be the application of a tool capable of study- ing the relationship between audio-tactile musical perception during a live performance, exploiting a hybrid bone-cranial transduction system side-by-side with a traditional multi- channel system.</span></p>\r\n<p><span></span></p>\r\n<p><img src=\"/media/uploads/dscf5166.jpg\" alt=\"\" max-width=\"1300\" max=\"\" wind-=\"\" height=\"866\" /><br /><span></span></p>",
        "topics": [],
        "user": {
            "pk": 87933,
            "forum_user": {
                "id": 87829,
                "user": 87933,
                "first_name": "Alberto",
                "last_name": "Gatti",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/c1dbe972892e0d5cb179d0cfd3a75585?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-08-06T17:33:40.740836+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 963,
                        "forum_user": 87829,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-23",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "gatti",
            "first_name": "Alberto",
            "last_name": "Gatti",
            "bookmarks": []
        },
        "slug": "resounding-bodies-a-multimodal-compositional-approach-by-alberto-gatti",
        "pk": 3320,
        "published": true,
        "publish_date": "2025-03-12T17:10:38+01:00"
    },
    {
        "title": "Embodiement of a Decentralised Sonic Space - Aiden Shabka, Udit Datta, Nicholas Farris",
        "description": "Presented during the IRCAM Forum Workshop 2023 In Paris",
        "content": "<p>This Project focuses on the decentralisation of spatial/immersive audio. What does spatial audio feel like when resonance is explored fragmented across a building or through an assemblage of sculptural objects? How does the interaction of individuals connect to form a symbiotic sonic environment? We would like the experiencer to feel the closeness of the origin of soundscapes and its intrinsic capacity to shape an ecology of beings.&nbsp; Through a loose and entropic distribution of transducers and contact mics we seek to tap into the intuitive nature of creation in communication with the textures and materiality of the lived environment. The result is a constantly evolving superposition in time of soundscapes both natural and alien to our senses toeing the line of embodiment and dissolution in an act of radical togetherness.<br />Exploring the resonant properties of materials we collectively craft an interactive sonic sculpture park unique to the passing moment. By providing each audience member with their own contact speaker, they will have autonomy to place the speaker on sculptures and surfaces of a room creating an evolving and living sonic environment. Through the use of physical computing and Maxmsp, the soundscape is perpetually evolving through the data input of the audience movement, further solidifying the role of the individual in the sonic outcome of the piece.</p>\r\n<p>Project by Udit Datta &amp; Aiden Shabka,&nbsp;Nicholas Farris</p>",
        "topics": [],
        "user": {
            "pk": 27466,
            "forum_user": {
                "id": 27438,
                "user": 27466,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/992209e1e1841733c7a8647acd01b437?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "shabka",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "embodiement-of-a-decentralised-sonic-space-aiden-shabka-udit-datta-nicholas-farris",
        "pk": 2140,
        "published": true,
        "publish_date": "2023-03-15T11:52:59+01:00"
    },
    {
        "title": "OpenMusic 7.1 release",
        "description": "Latest version of OpenMusic, the Computer Assisted Composition environment.",
        "content": "<div>\r\n<p><a href=\"https://github.com/openmusic-project/openmusic/releases/tag/v7.1\">OpenMusic 7.1 </a>is freely available for download</p>\r\n<p>OPERATING SYSTEMS</p>\r\n<ul>\r\n<li>MacOS: 64bits - ARM and Intel processors,</li>\r\n<li>WINDOWS: 32 bits</li>\r\n<li>LINUX: 64 bits RPM and DEB packages, tar-ball</li>\r\n</ul>\r\n<p>NEW FEATURES</p>\r\n<ul>\r\n<li>New score functions:<br />set-obj-pitch, set-obj-vel, set-obj-chan, set-obj-port, set-obj-tempo</li>\r\n<li>Jack support for linux (V. Anders)</li>\r\n</ul>\r\n<p>IMPROVEMENTS</p>\r\n<ul>\r\n<li>shortcut for curved/straight connections (z)</li>\r\n<li>scale setting is in preferences</li>\r\n<li>alt+selection selects also connections</li>\r\n<li>concat-score-objs now replaces concat-voices and covers list of chord-seqs or multi-seqs or voices or polys</li>\r\n<li>comment boxes are resized correctly</li>\r\n<li>info definition (inspection of code) is now resizable</li>\r\n<li>key shortcuts for recording midi (q and w)</li>\r\n<li>infoeditor remembers size and position</li>\r\n<li>PortMidi setup's height is resizable (preferences)</li>\r\n<li>om-inspect improved: resizable and closes all inspectors windows</li>\r\n<li>cut/copy/paste in temporal box info</li>\r\n<li>box resize shortcut (ctrl/cmd+shift+arrows)</li>\r\n<li>slotboxes are callable just as any ombox</li>\r\n<li>scorepatch :\r\n<ul>\r\n<li>connection standard display</li>\r\n<li>fixed zoom display</li>\r\n</ul>\r\n</li>\r\n<li>OK button on portmidi setup panel</li>\r\n<li>auto connect output/input using option+cmd (ctrl+alt) and more</li>\r\n</ul>\r\n<p>FIXES</p>\r\n<ul>\r\n<li>Record chord-seq restaured with new recording modes</li>\r\n<li>Tuplets optimization in mxml (bugfix)</li>\r\n<li>Fixed list presentation of workspace at startup</li>\r\n<li>Drag&amp;drop score instances in maquette now works</li>\r\n<li>micro-channel approx fix (16th tones)</li>\r\n<li>MAQUETTE loop play mode fixed</li>\r\n<li>omaudiolib fix for 
windows</li>\r\n<li>Dark mode support (aqua display) for mac</li>\r\n<li>Fixed closing instances (automatic closing of editors when instances are deleted)</li>\r\n<li>set-obj-mode fix (internal chord)</li>\r\n</ul>\r\n<p></p>\r\n<p><video width=\"300\" height=\"150\" style=\"display: block; margin-left: auto; margin-right: auto;\" controls=\"controls\">\r\n<source src=\"https://forum.ircam.fr/media/uploads/shortcuts_vid.mp4\" type=\"video/mp4\" />\r\n<source src=\"https://forum.ircam.fr/media/uploads/shortcuts_vid.mp4\" type=\"video/mp4\" /></video></p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n</div>",
        "topics": [
            {
                "id": 954,
                "name": "CAC",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 955,
                "name": "Computer Assisted Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 956,
                "name": "programming",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 14,
            "forum_user": {
                "id": 14,
                "user": 14,
                "first_name": "Karim",
                "last_name": "Haddad",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1f556229c0742ef0586dd43d312f81a4?s=120&d=retro",
                "biography": "Karim Haddad was born in 1962 in Beirut, Lebanon. He studied at the national conservatory there until it closed its doors in 1975 due to the civil war. He then went on to study philosophy and literature. Haddad received six awards from the CNSMD de Paris in addition to the Diplôme Supérieur de Composition with honors. He has worked with composers such as A. Bancquart, P. Mefano, K. Huber, and Emmanuel Nunes. This learning period was marked by his keen interest in non-tempered spaces and their strong relationship with temporal poetry. In 1992 and 1994 he took part in the Ferienkurse für Musik in Darmstadt, where he received a scholarship. In 1995, he took a class in computer music at IRCAM, and from that point on, the computer became the only tool he used for the elaboration of his works.\r\n\r\nAs a computer music expert, and more particularly an expert in computer-assisted composition, in 2000 he was given responsibility for technical support for the IRCAM Forum. He has developed several tools for the OpenMusic environment (synthesis control via Csound), as well as interfaces between this environment and score editors such as Finale and Lilypond.",
                "date_modified": "2026-02-18T11:08:17.096351+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 3,
                        "forum_user": 14,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 0,
                        "keys": [
                            {
                                "id": 544,
                                "membership": 3
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "haddad",
            "first_name": "Karim",
            "last_name": "Haddad",
            "bookmarks": []
        },
        "slug": "openmusic-71-release",
        "pk": 1406,
        "published": true,
        "publish_date": "2022-10-10T17:46:39+02:00"
    },
    {
        "title": "NYU Course Semester at IRCAM",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>Each year, 10 young composers from all over the world are selected to follow a one-year program of composition and acquire real technical autonomy with a number of computer programs. At the end of the program the students have to compose a mixed-music work in one of a multitude of forms: a piece for solo musician and electronics, an electroacoustic work, a sonic installation, or a work involving other media such as dance, text, or images, presented in concert at the beginning of the season in September.</p>\r\n<p>This course is aimed at students of NYU music technology and composition. It consists of a semester of fourteen day-long sessions focused primarily on sound design and composition in real time with Max (Cycling &rsquo;74 / IRCAM), including sessions with IRCAM software such as ASAP and Partiel and/or Antescofo, and presentations from IRCAM&rsquo;s R&amp;D departments. Students are expected to complete a project using Max: a real-time composition, performance, installation or audio tool that will be presented at the end of the semester.</p>",
        "topics": [],
        "user": {
            "pk": 14664,
            "forum_user": {
                "id": 14661,
                "user": 14664,
                "first_name": "Philippe",
                "last_name": "Langlois",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_E1985.jpg",
                "avatar_url": "/media/cache/27/e9/27e9bea3bd17be16bb8f16a4ff2dbaa8.jpg",
                "biography": "Philippe Langlois is a doctor of musicology, specialized in the historical relations between electroacoustic music and cinema. His PhD thesis was published (in French) as The Bells of Atlantis by mf editions in 2012 and republished in 2022. From 2001 until 2011 he was artistic producer of the renowned national radio broadcast « workshop for radio creation » on France Culture. \nIn parallel, he co-founded and has taught in the master's degree in Sound Design at the Fine Art School of Le Mans (France). \nHe also composes music for auteur documentary films, installations and experimental cinema. \nSince 2017, he has headed the department of education and documentation at IRCAM.",
                "date_modified": "2024-08-30T16:19:39.396482+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 515,
                        "forum_user": 14661,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "Langlois",
            "first_name": "Philippe",
            "last_name": "Langlois",
            "bookmarks": []
        },
        "slug": "ircam-cursus-on-composition-and-computer-music",
        "pk": 1331,
        "published": true,
        "publish_date": "2022-09-13T12:14:14+02:00"
    },
    {
        "title": "River's Odyssey or 'Ile et Une Nuit' - Savyna Indranee Darby, Joseph Whitmore, Bryan Yuenshen Wu, Dragon",
        "description": "Presented during the IRCAM Forum Workshop 2023 In Paris",
        "content": "<p><span>River&rsquo;s Odyssey tells the story of a wandering child named River, who sets out bravely on a journey to find Beeja (camphor tree), in a world almost submerged by the rising ocean tides. The river he travels by is an overspill of seawater, replete with plants that have navigated, adapted to and are now thriving in an ever- changing landscape on the brink. Wildlife is present. Heard but not seen. </span></p>\r\n<p><span>The dialogue River and Beeja exchange, via a transistor radio at first, &ldquo;trees communicate at 220 hz frequency&rdquo;, offers an insight into the connections between child and tree, through ancestral care, the breath they exchange and the site-specific memories they co-create. It touches on themes of colonialism, cultivation and creolisation, leaning on reciprocation, intergenerational and interspecies knowledge that is gained through communication. The chrysalis-like hollow in the tree becomes an Immersive environment for deep listening, quantum storytelling, exploration, magical thinking and metamorphosis. Time spent in there enables the flowering of a different kind of connectedness, where a being who enters it, physically or spiritually, is never quite the same when they leave, forever embedded with a seed of intent and purpose. </span></p>\r\n<p><span>It is presented as a 3-part experience, first with a spatial sound piece called \"Ile et Une Nuit\", then an audio-visual piece built in Unreal Engine. Where that story ends in the digital realm, a new one begins in the hollow of a physical miniature garden, conjoined as a plural multi-sensory installation with a bonsai tree in full bloom. 
Part 3 is a VR piece, where the viewer is invited to be immersed in the botanical garden, the site for River's Odyssey, feeling nature and being connected to the Camphor Tree and hollow, by the breath connection. The VR piece is designed and augmented with digital sculptures for Dragon Chen's piece, 'The Stone-cene', to create a hybrid, symbiotic space conducive to coexistence among all species. </span></p>\r\n<p><span>River's Odyssey is written and directed by Savyna, in collaboration with <a href=\"https://forum.ircam.fr/profile/studioubl/\">Joseph Whitmore</a>, Bryan Yueshen Wu and Dragon Chunyi Shen. Performed by Oscar Rai (River), Titreranjan Mandil (Beeja) and Tara Lee (Mother Nature). Collaborators on the installation are Zacharias Wolfe (music), Eva Mandula &amp; Vazul K&ouml;l&egrave;s (Sculpture) and Kristof Nov&agrave;k (Cinematographer).</span></p>",
        "topics": [],
        "user": {
            "pk": 27379,
            "forum_user": {
                "id": 27351,
                "user": 27379,
                "first_name": "Savyna Indranee",
                "last_name": "Darby",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6cb6c7f51bd438a83903e615ccd888e1?s=120&d=retro",
                "biography": "Savyna Indranee is a film producer, sound artist, writer and plural storyteller with a penchant for arboreal beings, their sacrality/numinosity and the fragile connections humans share with them through colonialism, diasporic spaces, memory, play, smell and sense of wonder. \n\nHer work at the RCA has led her to explore new modes of inquiry in multi-sensory storytelling with a penchant for post-colonial science fiction and speculative fabulation. The stories she tells are set in a botanical garden in Mauritius, around a Camphor tree as praxis, where natural and digital environments collide to (re)create a worlding phenomenon.\n\nAs a film-maker, she has just co-written 'The boy who belonged to the Sea', a script adapted from Denis Thériault's critically acclaimed novel. Determined to practise a kinder approach to film making & production, she is passionate about telling critically compelling stories without compromising on the health of the planet and her inhabitants, living and non-living. The sea, especially, is close to her heart and its wellbeing, the driving force behind producing her first feature film. \n\nIRCAM has given her the opportunity to deep-dive into the world of sound.",
                "date_modified": "2023-03-24T20:39:05+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "savyna",
            "first_name": "Savyna Indranee",
            "last_name": "Darby",
            "bookmarks": []
        },
        "slug": "rivers-odyssey-or-ile-et-une-nuit-savyna-indranee-darby-joseph-whitmore-bryan-yuenshen-wu-dragon",
        "pk": 2155,
        "published": true,
        "publish_date": "2023-03-23T10:57:10+01:00"
    },
    {
        "title": "L'étrangeté perceptive en réalité virtuelle",
        "description": "Cette installation en réalité virtuelle place la voix au cœur de l'interaction multisensorielle dans le cadre d'un test perceptif scientifique combiné à une expérience artistique immersive.",
        "content": "<p style=\"text-indent: 20px; text-align: justify;\">Les vid&eacute;os &agrave; 360&deg; en r&eacute;alit&eacute; virtuelle (RV) pr&eacute;sentent certaines limites, comme l'impossibilit&eacute; d'effectuer des d&eacute;placements dans l'environnement par la volont&eacute; propre du spectateur ni des interactions physiques directes avec des objets virtuels. La voix du spectateur pourrait &ecirc;tre un moyen d'augmenter cette interactivit&eacute; puisqu'elle ne n&eacute;cessite pas de jeu physique mais pourrait suffire &agrave; produire des cons&eacute;quences physiques dans le monde virtuel. Dans cette recherche artistique &eacute;labor&eacute;e dans le cadre d'une r&eacute;sidence &agrave; l'IRCAM en 2019-2020, nous avons cherch&eacute; &agrave; &eacute;valuer la valeur ajout&eacute;e d'interactivit&eacute; lorsque le spectateur utilise sa propre voix, en incluant des transformations en temps r&eacute;el de timbre et de spatialisation pour l'int&eacute;grer dans un sc&eacute;nario au contexte futuriste &agrave; travers un dialogue avec une intelligence artificielle ayant pris forme humaine. En effet, un test de Turing invers&eacute; fictif sert de pr&eacute;texte &agrave; cette interaction dialogu&eacute;e et doit permettre d'&eacute;valuer de mani&egrave;re fictive notre propre degr&eacute; d'humanit&eacute;. 
Un test scientifique sur la perception en RV est r&eacute;alis&eacute; en parall&egrave;le &agrave; cette interaction afin d'&eacute;valuer si la qualit&eacute; de la voix du spectateur transform&eacute;e en temps r&eacute;el lui permet d'incarner davantage son personnage dans la fiction en RV, en jouant sur l'effet d'&eacute;tranget&eacute; que ces transformations peuvent g&eacute;n&eacute;rer.</p>\r\n<p style=\"text-align: justify;\"><a href=\"/media/uploads/user/etrangete_perceptive_en_rv.pdf\">[T&eacute;l&eacute;charger l'article au format pdf]</a></p>\r\n<p style=\"text-align: justify;\"><video width=\"300\" height=\"150\" controls=\"controls\">\r\n<source src=\"/media/uploads/user/forum_extrait_turing.mp4\" type=\"video/mp4\" /></video></p>\r\n<div style=\"text-align: justify;\">Film&eacute; pendant les Ateliers du Forum 2020</div>\r\n<h6 style=\"text-align: justify;\"></h6>\r\n<h3 style=\"text-align: justify;\"><strong>Pr&eacute;ambule</strong></h3>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Cet article d&eacute;taille les enjeux scientifiques soulev&eacute;s dans le cadre d'une exp&eacute;rience de perception multisensorielle en RV, les propositions artistiques relevant d'un contexte futuriste et anthropoc&eacute;nique ins&eacute;r&eacute;es dans une narration en RV, ainsi que les choix technologiques (vid&eacute;o &agrave; 360&deg; immersive, son 3D ambisonique et binaural) effectu&eacute;s lors de notre r&eacute;sidence en recherche artistique &agrave; l'IRCAM en 2019-2020. L'installation d&eacute;velopp&eacute;e en RV &agrave; l'issue de la r&eacute;sidence et pr&eacute;sent&eacute;e lors du Forum IRCAM du 4 au 6 mars 2020 est un premier aboutissement du projet. 
Celui-ci continue &agrave; &ecirc;tre d&eacute;velopp&eacute; actuellement, d'une part dans un cadre scientifique (test perceptif), d'autre part dans un cadre artistique (installation interactive et film autonome en RV), non pas de mani&egrave;re cloisonn&eacute;e mais par une &eacute;mulation forte et inspirante entre science et art.</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<h3 style=\"text-align: justify;\"><strong>Le test de Pieter Musk</strong></h3>\r\n<h4 style=\"text-align: justify;\"><strong><em>Enjeux scientifiques et technologiques</em></strong></h4>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Trois grands enjeux &agrave; la fois scientifiques et technologiques ont nourri l'exp&eacute;rience perceptive d&eacute;velopp&eacute;e en RV&nbsp;: l'&eacute;tranget&eacute; perceptive, l'adaptation perceptive de contenus sonores et visuels en RV et l'utilisation de la voix pour interagir en RV. Trois axes de recherche en ont d&eacute;coul&eacute; et nous ont inspir&eacute; alternativement ou simultan&eacute;ment lors du d&eacute;veloppement de notre installation.</p>\r\n<p style=\"text-align: justify;\"><u>L'&eacute;tranget&eacute; perceptive :</u></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">L'&eacute;tranget&eacute; perceptive est un concept dont les enjeux se font de plus en plus pr&eacute;gnants &agrave; l'heure actuelle de l'av&egrave;nement des assistants vocaux, des agents conversationnels incarn&eacute;s ou encore des robots.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Pour saisir facilement ce concept, on peut par exemple s'appuyer sur la litt&eacute;rature fantastique : dans <em>Le marchand de sable </em>(1815) d'E.T.A. Hoffmann, un jeune gar&ccedil;on est &eacute;pris d'une jeune fille. Cependant, il la trouve troublante sous bien des aspects (froideur au contact de sa peau, visage impassible...). 
A la fin de cette nouvelle, il r&eacute;alisera que la jeune fille n'&eacute;tait pas un &ecirc;tre humain mais un robot d&eacute;velopp&eacute; &agrave; la perfection par son \"p&egrave;re\" physicien. L'&eacute;tranget&eacute; ou l'inqui&eacute;tude que peut provoquer cette familiarit&eacute; humaine a &eacute;t&eacute; &eacute;tudi&eacute;e par la suite notamment en psychologie (e.g. Freud, 1919).</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">En 1970, le roboticien Mori propose l'hypoth&egrave;se de la vall&eacute;e de l'&eacute;tranget&eacute; (<em>uncanny valley</em>). Selon son hypoth&egrave;se, moins une entit&eacute; est similaire &agrave; un &ecirc;tre humain, moins la r&eacute;action qu'elle provoque est forte (e.g. r&eacute;action &eacute;motionnelle...), et inversement, plus la similarit&eacute; augmente, plus la r&eacute;action est forte. Mais cette relation n'est pas lin&eacute;aire : elle pr&eacute;sente un creux appel&eacute; \"vall&eacute;e de l'&eacute;tranget&eacute;\" (Fig. 1). Ce creux indique que lorsque la similarit&eacute; est assez forte mais n&eacute;anmoins imparfaite, l'entit&eacute; quasi-humaine peut provoquer une r&eacute;action tr&egrave;s n&eacute;gative (e.g. aversion). Les interpr&eacute;tations pour expliquer une telle r&eacute;action n&eacute;gative sont multiples : on peut se demander si la personne est morte, sujette &agrave; un pathog&egrave;ne, &agrave; une absence perturbante de d&eacute;fauts, ou encore d'un point de vue cognitif, il pourrait s'agir d'un conflit entre les indices perceptifs mis en jeu, on croirait reconna&icirc;tre un &ecirc;tre humain mais on n'en serait pas s&ucirc;r et notre syst&egrave;me perceptif s'en trouverait perturb&eacute;.</p>\r\n<p style=\"text-align: center;\"><img src=\"/media/uploads/user/9a1e6fa2f2de718685c5bfbda0796581.png\" alt=\"Fig1\" width=\"425\" height=\"229\" /></p>\r\n<div style=\"text-align: center;\"><strong>Figure 1. Vall&eacute;e de l'&eacute;tranget&eacute;. 
</strong>D'apr&egrave;s Mathur &amp; Reichling (2016).</div>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Un certain nombre d'auteurs ont test&eacute; cette hypoth&egrave;se dans un cadre scientifique, en particulier pour &eacute;valuer si des robots andro&iuml;des, g&eacute;n&eacute;ralement trop lisses pour &ecirc;tre v&eacute;ritablement humains, pouvaient g&eacute;n&eacute;rer ce type de r&eacute;actions n&eacute;gatives, ce qui s'av&egrave;rerait tr&egrave;s dommageable pour l'industrie robotique (Fig. 2).</p>\r\n<h6 style=\"text-align: justify;\"></h6>\r\n<p><img src=\"/media/uploads/user/capture_d&rsquo;&eacute;cran_2020-04-14_&agrave;_10.12.34.png\" alt=\"\" width=\"424\" height=\"401\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<div style=\"text-align: center;\"><strong>Figure 2. Exemples d'&eacute;tudes r&eacute;centes sur la perception de l'&eacute;tranget&eacute;.</strong></div>\r\n<div style=\"text-align: center;\">Dans chacune de ces &eacute;tudes, il s'agit d'&eacute;valuer la r&eacute;action que provoque une entit&eacute; andro&iuml;de compar&eacute;e &agrave; un &ecirc;tre humain. D'apr&egrave;s : &agrave; gauche : Chattopadhyay &amp; MacDorman (2016) ; &agrave; droite : Mathur &amp; Reichling (2016)&nbsp;; en bas&nbsp;: Ferrey et al. (2015).</div>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Le sc&eacute;nario artistique de notre installation inclut une conversation avec une intelligence artificielle incarn&eacute;e. 
Il s'agira de trouver la meilleure repr&eacute;sentation visuelle et auditive de cette entit&eacute; pour la rendre la plus cr&eacute;dible et pertinente possible dans le cadre de notre fiction, potentiellement en jouant sur un effet d'&eacute;tranget&eacute; chez le participant.</p>\r\n<p style=\"text-align: justify;\"><u>Coh&eacute;rence sonore et visuelle des contenus artistiques en RV :</u></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">La RV n&eacute;cessite &agrave; l'heure actuelle des ordinateurs puissants et co&ucirc;teux. En particulier, comme pour les jeux vid&eacute;o, les rendus visuels par synth&egrave;se d'environnements virtuels sont complexes &agrave; mettre en &oelig;uvre (c'est moins vrai pour les captations en 360&deg; immersive), tandis que le son est g&eacute;n&eacute;ralement beaucoup plus simple &agrave; g&eacute;n&eacute;rer, transformer et diffuser en RV pour obtenir un rendu coh&eacute;rent avec l'intention artistique. Le d&eacute;faut de r&eacute;alisme de l'environnement virtuel n'est pas forc&eacute;ment un probl&egrave;me en soi, car la narration suffit souvent &agrave; &ecirc;tre pleinement immerg&eacute; dans l'exp&eacute;rience en RV. L'enjeu r&eacute;side donc surtout dans la coh&eacute;rence de l'environnement virtuel visuel et auditif avec l'intention artistique.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Cependant, du fait de la forte divergence des processus de cr&eacute;ations en image et son, on risque de g&eacute;n&eacute;rer une incongruence perceptive g&ecirc;nante en RV (le cas limite &eacute;tant un rendu visuel non-r&eacute;aliste, car complexe &agrave; mettre en &oelig;uvre, associ&eacute; &agrave; un contenu sonore r&eacute;aliste, car obtenu et manipul&eacute; facilement). 
C'est pourquoi, au lieu de proposer une complexification du processus de cr&eacute;ation de l'image, souvent aux d&eacute;pens de l'intention artistique, on peut proposer une \"d&eacute;gradation\" du contenu sonore pour tendre vers une meilleure congruence perceptive visuo-auditive et une meilleure exp&eacute;rience immersive en RV.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Dans un travail ant&eacute;rieur, nous avions propos&eacute; des \"esquisses auditives\" de sons complexes comme pendant d'esquisses visuelles, cr&eacute;&eacute;es sur la base des indices de reconnaissance auditive (Fig. 3 ; cf. Isnard, 2016 ; Isnard et al., 2016).</p>\r\n<p><img src=\"/media/uploads/user/bc7c0b759a049f39b4a913792868c8e3.png\" alt=\"Fig3\" width=\"327\" height=\"248\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<div style=\"text-align: center;\"><strong>Figure 3. Repr&eacute;sentation sch&eacute;matique de l'incongruence (&agrave; gauche) et de la congruence (&agrave; droite) visuo-auditive qui peuvent &ecirc;tre g&eacute;n&eacute;r&eacute;es en fonction du niveau d'adaptation des contenus image et son.</strong></div>\r\n<div style=\"text-align: justify;\"><strong></strong></div>\r\n<div style=\"text-align: justify;\"><strong> </strong></div>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Peu de traits visuels suffisent &agrave; reconna&icirc;tre un objet visuel (ici un visage). De la m&ecirc;me mani&egrave;re, le syst&egrave;me auditif humain peut se contenter de peu de traits auditifs pour reconna&icirc;tre un son. On fait l'hypoth&egrave;se que la congruence visuo-auditive est meilleure dans le cas o&ugrave; les complexit&eacute;s des objets visuel (e.g. visage) et auditif (e.g. 
voix) sont adapt&eacute;es.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Dans le cadre de notre installation en RV, les contenus image et son seront transform&eacute;s pour correspondre &agrave; l'intention artistique et au contexte futuriste et anthropoc&eacute;nique du sc&eacute;nario. Il s'agira d'abord d'adapter l'image et le son en correspondance pour favoriser la congruence visuo-auditive et une meilleure exp&eacute;rience d'immersion en RV.</p>\r\n<p style=\"text-align: justify;\"><u>L'utilisation de la voix pour interagir en RV :</u></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Les vid&eacute;os &agrave; 360&deg; en RV ne permettent ni d&eacute;placement ni interaction. L'image est capt&eacute;e en 360&deg; et le participant qui la visualise peut seulement tourner sur 360&deg; pour observer toute la sc&egrave;ne immersive. Son int&eacute;r&ecirc;t est qu'elle est relativement simple &agrave; mettre en &oelig;uvre (en comparaison au d&eacute;veloppement d'un environnement de synth&egrave;se en 3D) et qu'elle permet d'obtenir une qualit&eacute; parfaitement r&eacute;aliste comme mati&egrave;re brute avant traitements.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Certaines propositions sont en d&eacute;veloppement pour pallier ces limites. Par exemple des r&eacute;seaux de cam&eacute;ras <em>light fields</em>, ou encore une reconstitution de l'effet de parallaxe sur une image initialement monoscopique &agrave; l'aide de techniques computationnelles (Fig. 
4).</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\"></p>\r\n<p><img src=\"/media/uploads/user/c9216ed3d9af47eba1f48a8d80092988.png\" alt=\"Fig4a\" width=\"357\" height=\"198\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<p><img src=\"/media/uploads/user/c8836d7d35d77b5f50a2bc45f8a4d2d3.png\" alt=\"Fig4b\" width=\"505\" height=\"179\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<div style=\"text-align: center;\"><strong>Figure 4. Exemples de techniques pour proposer un d&eacute;placement du corps en RV &agrave; 360&deg;.</strong></div>\r\n<div style=\"text-align: center;\">En haut : r&eacute;seau de cam&eacute;ras <em>light fields </em>(Google) ; en bas : processus de synth&egrave;se de l'effet de parallaxe sur une image monoscopique (d'apr&egrave;s Dinechin &amp; Paljic, 2018).</div>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">On peut cependant &eacute;galement s'inspirer de l'interactivit&eacute; propos&eacute;e par les \"agents virtuels incarn&eacute;s\", pour lesquels la voix, les gestes ou le regard du participant peuvent &ecirc;tre impliqu&eacute;s dans une interaction avec ce type d'agents virtuels (Fig. 5).</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\"></p>\r\n<p><img src=\"/media/uploads/user/fig5.png\" alt=\"\" width=\"1008\" height=\"321\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<div style=\"text-align: center;\"><strong>Figure 5. Exemples d'agents conversationnels incarn&eacute;s.</strong> D'apr&egrave;s : &agrave; gauche : Kopp et al. (2003) ; au milieu : Tamagawa et al. (2011) ; &agrave; droite : Baur et al. 
(2013).</div>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Dans notre installation, nous avons opt&eacute; pour l'utilisation de la voix du participant, qui repr&eacute;sente selon nous une mani&egrave;re peu co&ucirc;teuse, simple et efficace pour am&eacute;liorer l'interactivit&eacute; en RV &agrave; 360&deg;. De plus, nous proposons d'ajouter des transformations en temps r&eacute;el de la voix propre du participant pour lui permettre d'incarner au mieux un personnage de fiction (par exemple, si le participant devait incarner un monstre dans un jeu vid&eacute;o, on lui proposerait de transformer sa propre voix en temps r&eacute;el en voix de monstre pour qu'il incarne au mieux son personnage). Cependant, on pourra se demander si de telles transformations ne seront pas susceptibles de g&eacute;n&eacute;rer un effet d'&eacute;tranget&eacute; chez le participant et si la congruence visuo-auditive sera toujours respect&eacute;e.</p>\r\n<h4 style=\"text-align: justify;\"><strong><em>Sc&eacute;nario artistique</em></strong></h4>\r\n<p style=\"text-indent: 20px; text-align: justify;\">A l'origine, le test de Turing a &eacute;t&eacute; propos&eacute; par le c&eacute;l&egrave;bre math&eacute;maticien pour d&eacute;terminer si une entit&eacute; donn&eacute;e est un &ecirc;tre humain ou une machine (du type intelligence artificielle), en supposant qu'avec l'am&eacute;lioration des technologies la fronti&egrave;re entre les deux deviendrait d'autant plus t&eacute;nue et que les machines pourraient finir par se faire passer pour des humains en imitant certaines de nos capacit&eacute;s cognitives. Le test consiste essentiellement &agrave; poser des questions &agrave; cette entit&eacute; inconnue via une interface. 
En fonction des r&eacute;ponses donn&eacute;es, on doit en d&eacute;duire si cette entit&eacute; est un &ecirc;tre humain ou une machine.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">En partant de cette source d'inspiration, l'&eacute;crivain de science-fiction Philip K. Dick, dans <em>Les andro&iuml;des r&ecirc;vent-ils de moutons &eacute;lectriques ? </em>(1966 ; repris au cin&eacute;ma sous le titre de <em>Blade runner </em>par Ridley Scott en 1982, cf. Fig. 6), a imagin&eacute; le test fictif de Voight-Kampff con&ccedil;u pour permettre &agrave; la police de retrouver et d&eacute;masquer des andro&iuml;des &eacute;vad&eacute;s (les \"r&eacute;plicants\"), indiscernables sans cela de la population humaine. Ce test consiste &agrave; poser des questions d&eacute;rangeantes pour examiner si elles provoquent des r&eacute;actions &eacute;motionnelles chez le sujet, qui n'existent pas chez les r&eacute;plicants.</p>\r\n<p style=\"text-align: center;\"><img src=\"/media/uploads/user/b4dfe50de519f9e302972183d9632663.png\" alt=\"Fig6\" width=\"225\" height=\"300\" /></p>\r\n<div style=\"text-align: center;\"><strong>Figure 6. Affiche du film \"Blade runner\".</strong></div>\r\n<p style=\"text-align: center;\"><strong></strong></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Aujourd'hui, c'est cependant moins la question des machines qui se feraient passer pour des humains qui semble vraiment probl&eacute;matique, dans la mesure o&ugrave; c'est toujours l'humain (pour l'instant) qui contr&ocirc;le la machine et qui cherche &agrave; am&eacute;liorer cette imitation pour le meilleur et pour le pire, que la probl&eacute;matique des humains qui se transformeraient progressivement en machines par le biais de diverses augmentations technologiques. On pense par exemple &agrave; l'ensemble de nos assistants &eacute;lectroniques (&agrave; commencer par Internet) jusqu'aux proth&egrave;ses augment&eacute;es (cf. 
Frischmann &amp; Selinger, 2018).</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Pour notre installation, nous avons donc imagin&eacute; un test subversif pour d&eacute;terminer si le participant ne serait pas lui-m&ecirc;me, &agrave; un certain degr&eacute;, une machine. Dans l'exp&eacute;rience en RV, le test est propos&eacute; par notre personnage de fiction Pieter Musk, une intelligence artificielle qui se pr&eacute;sente comme le fils d'Elon Musk, le c&eacute;l&egrave;bre entrepreneur. A l'aide de son test de Turing invers&eacute;, Pieter Musk cherche &agrave; identifier le spectateur, en d&eacute;terminant son degr&eacute; d'humanit&eacute;, pour lui permettre ou non d'acc&eacute;der aux donn&eacute;es confidentielles de son p&egrave;re. Les questions sont volontairement d&eacute;rangeantes pour susciter une r&eacute;action &eacute;motionnelle. Un exemple : \"Votre enfant de 7 ans rentre &agrave; la maison avec un bocal rempli de grenouilles mortes [...]. Il vous tend &eacute;galement le couteau encore ensanglant&eacute; qui lui a servi &agrave; d&eacute;couper les grenouilles [...]. Que lui dites-vous ? R&eacute;ponse A : merveilleux ! Je te d&eacute;barrasse de tout &ccedil;a [...]. R&eacute;ponse B : vous faites comme si de rien n'&eacute;tait [&hellip;]. 
R&eacute;ponse C : vous roulez des yeux, pris de vertige [&hellip;].\"</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Dans l'installation, les r&eacute;ponses donn&eacute;es par le participant font partie de la fiction et ne rentrent pas en compte dans l'analyse scientifique.</p>\r\n<h4 style=\"text-align: justify;\"><strong><em>Protocole exp&eacute;rimental du test scientifique</em></strong></h4>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Pour tester l'interactivit&eacute; et l'immersion du participant dans l'installation en RV, plusieurs param&egrave;tres sont modifi&eacute;s successivement au cours du test :</p>\r\n<ul style=\"text-align: justify;\">\r\n<li style=\"text-align: justify;\">le son trait&eacute; en temps r&eacute;el en timbre (voix humaine ou voix robotique) et en spatialisation (voix colocalis&eacute;e avec la source vocale ou d&eacute;localis&eacute;e) ;</li>\r\n<li style=\"text-align: justify;\">l'image trait&eacute;e de mani&egrave;re correspondante, respectivement en distorsion et en dissociation RGB.</li>\r\n</ul>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Pour chaque question fictive que lui pose Pieter Musk, le participant doit r&eacute;pondre &agrave; voix haute. 
A la suite de quoi, il effectue une &eacute;valuation perceptive sur une &eacute;chelle visuelle pour d&eacute;terminer si les traitements sur sa propre voix favorisent ou non son interaction en RV.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Nous faisons l'hypoth&egrave;se que des traitements congruents entre l'image et le son, et entre l'environnement de fiction (futuriste) et la voix du participant (rendue robotique), favorisent cette interaction et am&eacute;liorent l'interactivit&eacute;.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">L'ensemble de l'exp&eacute;rience en RV dure environ 30 min.</p>\r\n<h4 style=\"text-align: justify;\"><strong><em>Conception de l'installation en RV</em></strong></h4>\r\n<p style=\"text-indent: 20px; text-align: justify;\">L'acteur choisi pour cette exp&eacute;rience est Piersten Leirom, performeur et danseur. Le tournage a eu lieu dans un studio &agrave; l'IRCAM en 2018. Le mat&eacute;riel de tournage a &eacute;t&eacute; le suivant (cf. Fig. 7) :</p>\r\n<ul style=\"text-align: justify;\">\r\n<li style=\"text-align: justify;\">pour l'image, nous avons utilis&eacute; une cam&eacute;ra 360&deg; Insta Pro 2 (en location) qui pr&eacute;sente l'int&eacute;r&ecirc;t d'avoir une r&eacute;solution 8k, une gestion &agrave; distance du ventilateur (&agrave; couper pour limiter le bruit dans la prise de son) et de la capture vid&eacute;o, ou encore un stitching automatis&eacute; avant importation des images 360&deg; sur PC ;</li>\r\n<li style=\"text-align: justify;\">le son a &eacute;t&eacute; enregistr&eacute; en ambisonique &agrave; l'aide d'un microphone Eigenmike 32 capsules (appartenant &agrave; l'IRCAM). 
A noter que des microphones plus accessibles existent comme le Zoom H3-VR ou le Zylia ZM-1 ; de m&ecirc;me pour l'image avec une large gamme de cam&eacute;ras grand public.</li>\r\n</ul>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Pour rappel, le son ambisonique (obtenu par enregistrement ou synth&egrave;se) peut &ecirc;tre d&eacute;cod&eacute; sur tout type de syst&egrave;me de restitution sonore (d&ocirc;me ambisonique, syst&egrave;me 5.1, etc.), et notamment en binaural, c'est-&agrave;-dire en son 3D dans un casque audio quelconque, en conservant toute l'information de spatialisation de la sc&egrave;ne sonore originale.</p>\r\n<p><img src=\"/media/uploads/user/capture_d&rsquo;&eacute;cran_2020-04-14_&agrave;_10.15.10.png\" alt=\"\" width=\"339\" height=\"244\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<div style=\"text-align: center;\"><strong>Figure 7. Mat&eacute;riel de tournage.</strong></div>\r\n<div style=\"text-align: center;\">A gauche : cam&eacute;ra 360&deg; Insta Pro 2 ; &agrave; droite : microphone ambisonique Eigenmike 32 capsules.</div>\r\n<p style=\"text-align: center;\"></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Pour le montage, nous avons utilis&eacute; Adobe Premiere Pro qui prend en charge les images de l'Insta Pro 2. Pour le son, nous avons utilis&eacute; Reaper qui g&egrave;re facilement des fichiers comportant un grand nombre de canaux. Les cuts de d&eacute;but et de fin de chaque plan ont &eacute;t&eacute; ajust&eacute;s gr&acirc;ce aux claps effectu&eacute;s au tournage et aux timecodes.</p>\r\n<p style=\"text-align: justify;\">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; A noter qu'il existe la panoplie d'outils (gratuits) Facebook 360 Spatial Workstation permettant de produire de la RV. Le Spatialiser permet de g&eacute;rer du son spatialis&eacute; dans Reaper (ou d'autres DAW) avec un monitoring vid&eacute;o. 
L'Encoder permet de combiner une image de RV avec un son spatialis&eacute; en un seul fichier vid&eacute;o. Nous avons tout de m&ecirc;me opt&eacute; pour effectuer cet \"encodage\", du moins la lecture simultan&eacute;e de l'image et du son, dans Max 8 (cf. paragraphe suivant), notamment car ces outils ne permettent pas actuellement de g&eacute;rer des fichiers de haute-d&eacute;finition spatiale (8k pour l'image, 32 canaux pour le son).</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">La lecture et les traitements temps r&eacute;el de l'image et du son ont donc &eacute;t&eacute; effectu&eacute;s dans Max 8. La connexion avec notre casque de RV, un Oculus Rift, a &eacute;t&eacute; rendue possible gr&acirc;ce &agrave; la biblioth&egrave;que \"vr\" d&eacute;velopp&eacute;e par Graham Wakefield (Fig. 8 ; l'Oculus Rift n'est pas le seul casque pris en charge par cette biblioth&egrave;que). Cette biblioth&egrave;que est extr&ecirc;mement pratique et efficace car elle permet de r&eacute;cup&eacute;rer toutes les donn&eacute;es spatiales du casque Oculus mais &eacute;galement des manettes. Et elle permet &eacute;videmment l'affichage d'une image en RV dans le casque de RV.</p>\r\n<p><img src=\"/media/uploads/user/fig8.png\" alt=\"\" width=\"483\" height=\"404\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<div style=\"text-align: center;\"><strong>Figure 8. Affichage de l'aide de la biblioth&egrave;que \"vr\" dans Max 8 permettant la connexion &agrave; l'Oculus Rift et l'affichage d'une image de RV dans le casque de RV.</strong></div>\r\n<p style=\"text-align: justify;\"><strong></strong></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Pour le son spatialis&eacute;, nous avons utilis&eacute; la biblioth&egrave;que \"spat\" d&eacute;velopp&eacute;e au sein de l'&eacute;quipe Espaces Acoustiques et Cognitifs de l'IRCAM. 
Cette biblioth&egrave;que est particuli&egrave;rement compl&egrave;te et flexible &agrave; utiliser pour tous les aspects de son 3D qui existent &agrave; l'heure actuelle (Fig. 9).</p>\r\n<p><img src=\"/media/uploads/user/fig9.png\" alt=\"\" width=\"373\" height=\"369\" style=\"display: block; margin-left: auto; margin-right: auto;\" /></p>\r\n<div style=\"text-align: center;\"><strong>Figure 9. Conversion ambisonique vers binaural &agrave; l'aide de la biblioth&egrave;que \"spat\" dans Max 8 pour une lecture en RV du son spatialis&eacute; sur casque audio. </strong></div>\r\n<div style=\"text-align: center;\">Les coordonn&eacute;es spatiales du casque de RV sont transmises &agrave; l'objet du Spat pour la rotation de la sc&egrave;ne sonore capt&eacute;e en ambisonique 32 canaux avant conversion en binaural.</div>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Au bout du compte, la restitution de la vid&eacute;o est effectu&eacute;e &agrave; l'aide du casque de RV dans lequel on affiche l'image &agrave; 360&deg;, tandis que le son est restitu&eacute; sur le casque audio en binaural. Le participant &eacute;quip&eacute; du mat&eacute;riel de RV peut observer la sc&egrave;ne immersive tout autour de lui en tournant la t&ecirc;te et le corps, et les rendus visuels et auditifs sont alors restitu&eacute;s de mani&egrave;re coh&eacute;rente et actualis&eacute;s en temps r&eacute;el (rotations simultan&eacute;es de l'image 360&deg; et de la sc&egrave;ne sonore 3D).</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Enfin, pour am&eacute;liorer l'interactivit&eacute;, la voix propre du participant est capt&eacute;e &agrave; l'aide d'un microphone serre-t&ecirc;te pour &ecirc;tre trait&eacute;e en temps r&eacute;el &agrave; travers des vocoders pour g&eacute;n&eacute;rer un timbre robotique, ainsi qu'en spatialisation &agrave; nouveau &agrave; l'aide du Spat. 
Cependant, nous avons pr&eacute;f&eacute;r&eacute; ne pas modifier l'&eacute;volution du sc&eacute;nario de la fiction en fonction des r&eacute;ponses prononc&eacute;es par le participant pour ne pas perturber ni alourdir le test scientifique. Les questions et r&eacute;ponses s'encha&icirc;nent donc suivant un ordre pr&eacute;-&eacute;tabli.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">A noter, pour ce dernier aspect, que lorsqu'on parle dans un microphone et qu'on s'&eacute;coute dans un casque audio, on entend notre voix telle que peuvent l'entendre des personnes autour de nous. Cependant, cela ne correspond pas au timbre que nous entendons nous-m&ecirc;mes, car nous entendons deux flux sonores distincts du timbre qu'entendent des personnes autour de nous (ou tel qu'il est capt&eacute; par le microphone) : d'une part le son qui sort de notre bouche est filtr&eacute; par notre t&ecirc;te avant d'atteindre nos oreilles, d'autre part le son qui est produit par nos cordes vocales passe &eacute;galement, par un deuxi&egrave;me trajet, directement par conduction osseuse dans nos oreilles avec un filtrage sp&eacute;cifique. Nous avons donc effectu&eacute; un filtrage global simulant ces deux filtrages concomitants, tel que propos&eacute; par P&ouml;rschmann (2000). Il s'agit globalement d'un filtrage des hautes-fr&eacute;quences au-dessus de 5 kHz, qui permet donc de s'entendre &agrave; travers le microphone et le casque audio comme lorsqu'on s'entend soi-m&ecirc;me parler naturellement sans tout le mat&eacute;riel sollicit&eacute; ici pour la RV.</p>\r\n<h4 style=\"text-align: justify;\"><strong><em>Premiers r&eacute;sultats du test scientifique</em></strong></h4>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Treize participants ont accept&eacute; de passer l'exp&eacute;rience et d'effectuer l'&eacute;valuation perceptive sur l'interaction et l'immersion en RV avec un dispositif simulant un dialogue. 
Globalement, sur l'ensemble des conditions exp&eacute;rimentales, les participants ont not&eacute; une bonne interaction avec leur propre voix (not&eacute;e environ 4.5/7 en moyenne sur toutes les conditions confondues).</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Cependant, les r&eacute;sultats &eacute;taient peu variables en fonction des conditions exp&eacute;rimentales, qui correspondaient &agrave; une variation des transformations temps r&eacute;el. Pour la suite de l'&eacute;tude, il s'agira donc d'&eacute;largir la gamme des effets pour tenter d'observer des r&eacute;sultats davantage diff&eacute;renci&eacute;s en fonction des conditions.</p>\r\n<p style=\"text-align: justify;\">&nbsp; &nbsp; Par ailleurs, les r&eacute;sultats &eacute;taient tr&egrave;s variables en fonction des participants. Il semblerait que les participants plus familiers avec la RV appr&eacute;ciaient davantage l'exp&eacute;rience, probablement car ma&icirc;trisant mieux le syst&egrave;me ils pouvaient se concentrer d'autant plus sur l'interaction vocale. Il s'agira donc pour la suite de l'&eacute;tude de prendre en compte plus syst&eacute;matiquement les diff&eacute;rents profils des participants (na&iuml;fs, joueurs de jeux vid&eacute;o, etc.) et de proposer une &eacute;tape de familiarisation ou une comparaison voix naturelle vs. voix transform&eacute;e pour mieux rendre compte de l'int&eacute;r&ecirc;t du dispositif le cas &eacute;ch&eacute;ant.</p>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Enfin, les participants nous ont indiqu&eacute; en commentaires g&eacute;n&eacute;raux qu'ils avaient globalement beaucoup appr&eacute;ci&eacute; l'exp&eacute;rience, l'originalit&eacute; du sc&eacute;nario et l'interaction vocale. 
Cette exp&eacute;rience a g&eacute;n&eacute;r&eacute; relativement peu de sympt&ocirc;mes de cybercin&eacute;tose malgr&eacute; sa dur&eacute;e assez longue (30 min ; il est g&eacute;n&eacute;ralement recommand&eacute; de limiter une exp&eacute;rience en RV &agrave; environ 20 min maximum), sans doute car les mouvements de l'image et du son &eacute;taient assez limit&eacute;s.</p>\r\n<h4 style=\"text-align: justify;\"><strong><em>Bilan</em></strong></h4>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Les retours sur cette installation en RV ont &eacute;t&eacute; tr&egrave;s positifs et les premiers r&eacute;sultats du test scientifique tr&egrave;s encourageants. Un tel dispositif est compl&egrave;tement fonctionnel pour effectuer des tests perceptifs avec des composantes artistiques qui fournissent de la mati&egrave;re aux probl&eacute;matiques scientifiques. Plusieurs am&eacute;liorations sont n&eacute;anmoins envisag&eacute;es, en particulier : les conditions de transformations temps r&eacute;el propos&eacute;es, la vari&eacute;t&eacute; des interactions vocales pour &eacute;valuer &agrave; partir de quel moment et jusqu'&agrave; quel point on appr&eacute;cie l'effet rendu par les transformations sur notre propre voix, tout en remaniant le protocole exp&eacute;rimental pour restreindre le temps de passage en RV. 
De plus, l'ensemble du dispositif et des r&eacute;sultats perceptifs obtenus permettront de nourrir notre r&eacute;flexion pour la suite de notre travail sur ce dispositif : d'une part le prolongement du test scientifique, d'autre part la production d'un film autonome en RV fortement inspir&eacute; de cette installation.</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<h3 style=\"text-align: justify;\"><strong>Remerciements</strong></h3>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Nous tenons &agrave; remercier l'ensemble des personnes qui ont &eacute;t&eacute; impliqu&eacute;es dans ce projet : Piersten Leirom ; Isabelle Viaud-Delmon, Olivier Warusfel et toute l'&eacute;quipe Espaces Acoustiques et Cognitifs de l'IRCAM ; J&eacute;r&eacute;mie Bourgogne, Cyril Claverie et l'ensemble de la Production de l'IRCAM ; Greg Beller, Markus Noisternig, Paola Palumbo et l'ensemble du d&eacute;partement IRC de l'IRCAM ; Sebastian Rivas, Anouck Avisse et l'&eacute;quipe du GRAME pour une r&eacute;sidence de travail effectu&eacute;e au GRAME en 2019 et compl&eacute;mentaire &agrave; celle de l'IRCAM.</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<h3 style=\"text-align: justify;\"><strong>R&eacute;f&eacute;rences</strong></h3>\r\n<ul style=\"text-align: justify;\">\r\n<li>Baur, T., Damian, I., Gebhard, P., Porayska-Pomsta, K., &amp; Andr&eacute;, E. (2013). A job interview simulation: Social cue-based interaction with a virtual character. In <em>2013 International Conference on Social Computing</em> (pp. 220-227). IEEE.</li>\r\n<li>Chattopadhyay, D., &amp; MacDorman, K. F. (2016). Familiar faces rendered strange: Why inconsistent realism drives characters into the uncanny valley. <em>Journal of Vision</em>, <em>16</em>(11), 7-7.</li>\r\n<li>Dick, P. K. (1979). <em>Les Andro&iuml;des r&ecirc;vent-ils de moutons &eacute;lectriques?</em>. JC Latt&egrave;s.</li>\r\n<li>de Dinechin, G. D., &amp; Paljic, A. (2018). 
Cinematic virtual reality with motion parallax from a single monoscopic omnidirectional image. In <em>2018 3rd Digital Heritage International Congress (DigitalHERITAGE) held jointly with 2018 24th International Conference on Virtual Systems &amp; Multimedia (VSMM 2018)</em> (pp. 1-8).</li>\r\n<li>Ferrey, A. E., Burleigh, T. J., &amp; Fenske, M. J. (2015). Stimulus-category competition, inhibition, and affective devaluation: a novel account of the uncanny valley. <em>Frontiers in Psychology</em>, <em>6</em>, 249.</li>\r\n<li>Freud, S. (1919). <em>L&rsquo;inqui&eacute;tante &eacute;tranget&eacute; et autres essais</em> ([1985] &eacute;d.). Paris : Folio.</li>\r\n<li>Frischmann, B., &amp; Selinger, E. (2018). <em>Re-engineering humanity</em>. Cambridge University Press.</li>\r\n<li>Hoffmann, E. T. A. (1815). <em>Le marchand de sable.</em></li>\r\n<li>Isnard, V. (2016). <em>L'efficacit&eacute; du syst&egrave;me auditif humain pour la reconnaissance de sons naturels</em> (Doctoral dissertation, Paris 6).</li>\r\n<li>Isnard, V., Taffou, M., Viaud-Delmon, I., &amp; Suied, C. (2016). Auditory sketches: very sparse representations of sounds are still recognizable. <em>PLoS ONE</em>, <em>11</em>(3).</li>\r\n<li>Kopp, S., Jung, B., Lessmann, N., &amp; Wachsmuth, I. (2003). Max - a multimodal assistant in virtual reality construction. <em>KI</em>, <em>17</em>(4), 11.</li>\r\n<li>Mathur, M. B., &amp; Reichling, D. B. (2016). Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. <em>Cognition</em>, <em>146</em>, 22-32.</li>\r\n<li>P&ouml;rschmann, C. (2000). Influences of bone conduction and air conduction on the sound of one's own voice. <em>Acta Acustica united with Acustica</em>, <em>86</em>(6), 1038-1045.</li>\r\n<li>Tamagawa, R., Watson, C. I., Kuo, I. H., MacDonald, B. A., &amp; Broadbent, E. (2011). The effects of synthesized voice accents on user perceptions of robots. 
<em>International Journal of Social Robotics</em>, <em>3</em>(3), 253-262.</li>\r\n</ul>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<h3 style=\"text-align: justify;\"><strong>Pr&eacute;sentation des auteurs</strong></h3>\r\n<h4 style=\"text-align: justify;\"><strong><em>Vincent Isnard</em></strong></h4>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Docteur de l'IRCAM en neurosciences sp&eacute;cialis&eacute; en perception auditive et titulaire de trois Masters en technologies du son et d'une Licence en philosophie, Vincent Isnard est chercheur, ing&eacute;nieur du son et r&eacute;alisateur en informatique musicale. Ses pratiques musicales contemporaines se sont &eacute;galement d&eacute;velopp&eacute;es dans les classes de Laurent Durupt et Denis Dufour au Conservatoire.</p>\r\n<h4 style=\"text-align: justify;\"><strong><em>Trami Nguyen</em></strong></h4>\r\n<p style=\"text-indent: 20px; text-align: justify;\">Pianiste, performeuse et artiste visuelle, Trami Nguyen est dipl&ocirc;m&eacute;e d'un master de la HEM de Gen&egrave;ve. Co-fondatrice de l'Ensemble Links, elle d&eacute;fend des r&eacute;pertoires contemporains et des cr&eacute;ations de concerts participatifs, sc&eacute;nographi&eacute;s, immersifs et/ou multidisciplinaires. Ses projets visuels s'articulent autour de performances r&eacute;alis&eacute;es en Europe et s'&eacute;tendent au domaine de la r&eacute;alisation en r&eacute;alit&eacute; virtuelle.</p>",
        "topics": [],
        "user": {
            "pk": 429,
            "forum_user": {
                "id": 429,
                "user": 429,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6ae307980c2a3ac5af0fbe3706e063f1?s=120&d=retro",
                "biography": null,
                "date_modified": "2023-09-11T12:37:29.591061+02:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 43,
                        "forum_user": 429,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "VincentISNARD",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "letrangete-perceptive-en-realite-virtuelle-1",
        "pk": 624,
        "published": true,
        "publish_date": "2020-04-09T15:59:06+02:00"
    },
    {
        "title": "Gestural-Based Sound Spatialization & Synthesis Strategies in 3D Virtual Environment in Interactive Audiovisual Composition - Patrick Hartono",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>This presentation/demo is based on my final doctoral project that explores computer game technology's artistic potential, particularly through the adaptation of the hand gesture of the Sabetan Technique of Indonesia Wayang Kulit to create performative strategies for Interactive Audiovisual Composition.&nbsp;Through this presentation, I will demonstrate how I acquired the gestural information processed through a machine learning model to control the spatialization and synthesis parameters, including the locomotion and behaviour of visual objects in the virtual environment.&nbsp;</p>\r\n<p></p>",
        "topics": [],
        "user": {
            "pk": 838,
            "forum_user": {
                "id": 838,
                "user": 838,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/5df2d6f45a17ae6930d97c6772c3c3e2?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-28T05:25:57.231769+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "patrick_hartono",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "gestural-based-sound-spatialization-synthesis-strategies-in-3d-virtual-environment-in-interactive-audiovisual-composition",
        "pk": 2092,
        "published": true,
        "publish_date": "2023-02-28T17:02:30+01:00"
    },
    {
        "title": "FLUX",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris",
        "content": "<p>FLUX is an immersive spatial audio composition designed for IRCAM&rsquo;s 6 channel speaker setup. The work explores the relationship between rivers, cities and people, illustrating commonalities and differences of the perception of rivers across the world. Utilising recordings of a range of different people speaking about their personal experiences with rivers, FLUX brings attention to the significance of rivers in our memories, daily lives, and communities. &nbsp;</p>\n<p>The use of spatial audio allows the audience to experience a sense of geographical distance in a physical environment and illustrates the interconnectedness of bodies of water.&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 32945,
            "forum_user": {
                "id": 32897,
                "user": 32945,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6a1339760950a519a0910c128edfbbef?s=120&d=retro",
                "biography": "Ojasvani Dahiya is exploring creating interactive and immersive experiences that look at realities of the distant past and the far future which are grounded in the present. She is currently experimenting with new and emerging forms of technology to create visual experiences informed through sound and music. Her areas of interest are post-coloniality, identity, dreams and altered states of consciousness. Ojasvani graduated from Emerson College, Boston (2020) with a BFA in Media Arts Production, and went on to work in the Film/TV post-production industry in Los Angeles. She is currently on the Digital Direction MA program at the Royal College of Art.",
                "date_modified": "2023-11-06T21:49:51.196641+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "odahiya",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "flux-3",
        "pk": 2163,
        "published": false,
        "publish_date": "2023-03-25T15:36:08.986246+01:00"
    },
    {
        "title": "Crossroads AR - Colorful Paintings and Spatial Music in Augmented Reality - Bill Parod, Teresa Parod",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>Painter Teresa Parod and Composer and Software Developer Bill Parod have been collaborating on multi-media public art. Their work combines mural paintings with situated spatial music and sound art using a custom Augmented Reality mobile app.<br />​<br />Crossroads is a new project to bring this approach into a gallery setting by combining Teresa Parod's paintings with associated works of Bill Parod's spatial music, realized in a multichannel exhibition space. &nbsp;Each of the Crossroads paintings contribute interacting spatial music to the gallery space when viewed with the Crossroads AR mobile app. The music is formed from animal and musical characters interacting in the 3D personal space of the mobile app and also heard in the shared space of the gallery through multichannel speaker dispersion, forming an interplay of visual, aural, and viewer/listener participation.&nbsp;<br />​<br /><em>We come to crossroads all the time in our lives. When two roads intersect, they present three choices in where to go or which direction to take. That impact might be immediate or delayed, minor or profound. You make a decision which direction to go. Either direction could have been a very nice life, but we have pivotal choices. Why do we make the choices? Do we have regrets for not making a different choice? Do we think of Robert Frosts&rsquo; &ldquo;The Road Not Taken&rdquo;?</em><br /><em>&nbsp;</em><br /><em>Working on Crossroads AR with Bill (my lifelong partner) is a joy and I welcome his interpretation and collaboration in this work. Bill took the shape and colors of my paintings&rsquo; roads and put them in flight and movement within the AR. We have always considered Stevie Wonder&rsquo;s &ldquo;Ribbon in the Sky&rdquo; as &ldquo;our song&rdquo;. When I saw Bill&rsquo;s first graphics in Crossroads AR, &nbsp;I thought - these are like ribbons in the sky. 
They reflect and elevate the outcomes of our lives&rsquo; choices at their more spiritual level. - Teresa Parod</em><br />&nbsp;<br /><em>I am fortunate to be surrounded by Teresa's art and art making. Her eye for color, shape, and joy in the world surrounds me every day. I've lately been working on a music arising from the social behaviors of musical characters. This draws much from my enjoyment of listening to birds in the forest or the park - the rise and flow of their daily dramas in the passing day. So, birds inhabit this music as well, affecting each other but also the musical characters they live with. Because these events are spontaneous, the music is realized in a mobile app, in order to make emergent drama possible and though characteristic to the characters, also unpredictable. My aim is that the vividness of the characters&rsquo; personalities is shown in their interactions as they cross paths, and perhaps change course, making the experience feel like a living music as possibilities present themselves, develop, and resolve in ways both natural and surprising. It seemed like an apt musical approach to this Crossroads collection of Teresa's paintings.&nbsp;</em><em>Our hope is that by combining our work in Crossroads AR, the viewer and listener will be drawn to their own reflections of life&rsquo;s choices. - Bill Parod</em><br />​<br />The app offers a few types of interactions to the listener as well. In addition to zooming closer or farther from the paintings and moving within the space, turning the phone face down will rest the musical characters, leaving sound to just the birds. Bring the phone face-up and the sounding birds will invite instruments back in. They in turn cause others to react, and the music will begin to form again, though in a different way. &nbsp;Shaking the phone will alert other musical characters. Those in turn will alert others in the piece, forming new dramas, harmonies, and textures. 
The app can be played as an ensemble in this way. Or (our preference) just put the phone down and go deep, listening to the piece evolve over longer stretches of time.&nbsp;<br />&nbsp;<br />The app can also use pitch detection to interact with musical characters based on audio input. This is intended for player improvisation with the ensemble characters. This feature is still early in its development and requires more work and feedback from players before its general release in the app. If you would like to work with this, let us know.<br />&nbsp;<br />As a technical note, the app can send the characters&rsquo; play and position events to an external listener port via a simple OSC grammar. This is used to cast the experience to external multichannel spatial dispersion and recording strategies.&nbsp;<br />&nbsp;<br />The first Crossroads AR gallery exhibit will be at <a href=\"/collections/detail/ateliers-du-forum-ircam-edition-speciale-spatialisation-arvr/\" title=\"IRCAM Forum Workshop - Special edition VR/AR Spatialization\">IRCAM Forum Workshop - Special edition VR/AR Spatialization</a>, Spring 2023 at Ircam, Paris, France.&nbsp;<br />​<br /><a href=\"https://apps.apple.com/us/app/crossroads-ar/id1668210261\">The AR app</a> is built using the Unity Game Engine, sending OSC messages of audio characters' sound file and position information to a central computer running Max/MSP/Spat5 for multichannel 3D dispersion in the gallery space.&nbsp;</p>\r\n<p>For more information, see <a href=\"https://www.earful.be/crossroads\">Crossroads AR</a>.</p>\r\n<p><a href=\"https://www.earful.be\">Earful.be</a></p>",
        "topics": [],
        "user": {
            "pk": 31607,
            "forum_user": {
                "id": 31559,
                "user": 31607,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/billparod2020smaller.png",
                "avatar_url": "/media/cache/37/9a/379abe6631dec2e46dd1117e5c99a545.jpg",
                "biography": "Bill Parod is a composer, violinist, and software developer working on their combination using interactive game technology, spatial audio, and mobile app in spontaneous and immersive music. This is done with custom real-time 3D spatial sound software built with a modern game platform. This affords many types of realization, interaction, and media integration. Current work is on behaviorally defined interactive music, spatial sound installation software, and visual art A/R overlays.   He comes to this from an early career in composition, computer music, software synthesis, and field recording, then many years developing software for digital libraries in the arts and humanities: Ancient Greek Epic Poetry, Early Modern English Morphology, Buddhist Iconography of Silk Road Cave-Shrines, Renaissance Anatomy Study, Chicago History Encyclopedia, Topical Analysis of Online Discussion.   He lives in Evanston, Illinois near Chicago, USA but travels frequently.",
                "date_modified": "2025-09-10T03:59:14.277318+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "billparodearful",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "crossroads-ar-colorful-paintings-and-spatial-music-in-augmented-reality",
        "pk": 2046,
        "published": true,
        "publish_date": "2023-02-08T19:19:47+01:00"
    },
    {
        "title": "Ircam Educational Residence",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>Interactive digital tools for cultural engagement and education. This presentation will report on the progress of Dr. S. Alex Ruthmann's IRCAM artistic research residency (2020-2022), focused on the development of interactive learning modules for spatial music concepts as explored in Pierre Boulez' \"Dialogue de l'ombre double.\" Demo/presentation.</p>",
        "topics": [],
        "user": {
            "pk": 39,
            "forum_user": {
                "id": 39,
                "user": 39,
                "first_name": "S. Alex",
                "last_name": "Ruthmann",
                "avatar": "https://forum.ircam.fr/media/avatars/alexruthmann_portrait_square_0_1.png",
                "avatar_url": "/media/cache/7e/bf/7ebf2cb69693475cb8c6bb27b234fc62.jpg",
                "biography": "S. Alex Ruthmann is Area Head and Associate Professor of Interactive Media and Business at NYU Shanghai and Associate Professor of Music Education and Music Technology at NYU Steinhardt. He is the Founder/Director of the NYU Music Experience Design Lab (MusEDLab), and core faculty in the Music and Audio Research Lab (MARL). The MusEDLab's creative learning and software projects are in active use by over 6.5 million people across the world.\n\nRuthmann recently launched a new research lab focused on sustainable entrepreneurship practices in classical music training programs in collaboration with the New World Symphony. This work is funded by a recent 5-year award from the National Endowment for the Arts. Ruthmann's research portfolio also includes DigiSus, a Norwegian participatory design research project focused on the design and development of interactive arts spaces infused with non-screen-based digital technologies for creative play.\n\nRuthmann currently serves as Co-Editor of the International Journal of Music Education and is co-author of the book Scratch Music Projects, an introduction to creative music coding projects in MIT's Scratch programming language for kids.",
                "date_modified": "2024-10-08T11:26:37.742325+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "alexruthmann",
            "first_name": "S. Alex",
            "last_name": "Ruthmann",
            "bookmarks": []
        },
        "slug": "ircam-educational-residence",
        "pk": 1343,
        "published": true,
        "publish_date": "2022-09-13T17:00:35+02:00"
    },
    {
        "title": "Tutoriel Modalys n°11: The God of Lua",
        "description": "Part 11. A new object named mlys.lua looks very promising and allows for basically anything you can wish for inside Max/MSP.",
        "content": "<p>There is <a href=\"https://forum.ircam.fr/projects/detail/modalys/\">a new version of Modalys out</a>, compatible with the new OS on Mac and Windows, with a completely overhauled Medit and a powerful godlike object called mlys.lua! As the name suggests, it is an object in which you can script using the Lua language (learn it <a href=\"https://www.lua.org/manual/5.1/\">here</a>).&nbsp;</p>\r\n<p>The new <a href=\"https://support.ircam.fr/docs/Modalys/current/\">Modalys documentation</a> also looks very promising! It's really great to see so many awesome changes!</p>\r\n<p>In this tutorial I greatly simplify one of the mlys.lua example patches, building a simple bowl with all the accesses, connections, etc. As always you can find it on <a href=\"https://youtu.be/_q1T8QNwpFg\">Youtube</a> with all the bookmarks in the description:</p>\r\n<p><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"//www.youtube.com/embed/_q1T8QNwpFg\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p>Overall it feels like a very stable object with tons of possibilities! (And those of you who have followed this series will know I wouldn't really hold back any criticism if I had some.)</p>\r\n<p>Thanks a lot to Robert Pi&eacute;chaud for reaching out and giving me some hints to start understanding these new possibilities!</p>\r\n<p>Cheers, and who knows... maybe some more tutorials might come up soon...</p>",
        "topics": [
            {
                "id": 36,
                "name": "Max ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 277,
                "name": "Max 8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 79,
                "name": "Max8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 348,
                "name": "Max externals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n11-the-god-of-lua",
        "pk": 947,
        "published": true,
        "publish_date": "2021-03-26T09:54:20+01:00"
    },
    {
        "title": "Objects Orchestra - Xiangyu Wang",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>The Objects Orchestra is an interactive spatial sound work. A set of objects (furniture, electrical appliances, daily necessities) is placed in a space, and each of them tells its own story: how these industrial products go from being part of nature to products used in human life. Global capitalist production disrupts the metabolic interactions between man and the planet, and elements of nature remain in human society as products, entering the circulation of global capital markets rather than returning to the soil, air, and water. People regard nature as a means of production, an object that can be exploited. This arrogant mentality leads to the over-exploitation of the environment, which in turn causes problems such as pollution, geological disasters, and global warming.</p>\r\n<p>The Objects Orchestra reminds people that humans are still a part of nature, because all the objects that people use come from nature. The objects tell their own stories to human beings: how they came from plants in the mountains, ancient dinosaurs, or stars in the universe to the environment we live in, and finally passed from the metabolism of nature into the circulation of global capital markets.</p>",
        "topics": [],
        "user": {
            "pk": 39344,
            "forum_user": {
                "id": 39290,
                "user": 39344,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/a44bedf29a08fac96925105bc9f0c4a8?s=120&d=retro",
                "biography": "Objects in the space can make their own voices. For example, when you pick up a book and put it next to your ear, you can hear a seed germinating in the soil and growing into a sapling. You can hear a small tree grow into a great tree after the rain. You can hear the wind in the leaves. Then one day a truck comes into the quiet forest, cuts down the tree, and sends it to the factory. Then you can hear the roar of the paper machine in the paper mill, the sound of the printing house cutting paper and printing... Eventually, you'll hear the reader sitting in a cafe reading the book. All the audible objects in the room form an ensemble from different positions in the space, and the sound changes depending on the location of the audience. All objects begin by playing the sounds they make in nature and end with the sounds in human society. It will form a symphony of object stories.",
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "xiangyu1998",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "objects-orchestra-xiangyu-wang",
        "pk": 2131,
        "published": true,
        "publish_date": "2023-03-13T16:04:38+01:00"
    },
    {
        "title": "An Intuitive Path to Digital Synthesis with DDSP",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>While musical synthesizers have become increasingly accessible from an economic standpoint, the barrier of entry in terms of knowledge remains standing: it can be difficult to craft a desired sound from scratch without signal processing knowledge and extensive experience with electronic instruments.</p>\r\n<p>This presentation will address that problem, proposing a novel paradigm that combines Automatic Synthesizer Programming (ASP) with Musical Source Separation (MSS) &ndash; drawing on work from the presenter&rsquo;s Master&rsquo;s Thesis in Music Technology at NYU Steinhardt (completed Spring 2022 under the advisement of Dr. Brian McFee). A Python prototype will be introduced, which takes a polyphonic audio file containing a target bass sound and returns the parameters on a Max/MSP additive synthesizer to approximate that timbre.</p>\r\n<p>The presentation will discuss the motivation behind this new paradigm, which is intended to provide an intuitive sound design interface. From there, it will detail the prototype implementation &ndash; which combines two neural network modules &ndash; focusing on the application of Google&rsquo;s Differentiable Digital Signal Processing library. An objective analysis of the MSS module relative to baselines will follow. The presentation will culminate in a live demonstration of the automatically programmable Max synthesizer, showing how bass timbres can be creatively accessed from a few varied recordings.</p>",
        "topics": [],
        "user": {
            "pk": 31325,
            "forum_user": {
                "id": 31278,
                "user": 31325,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1d620b531a51d4d28969b5a62d068c61?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "bwschwartz",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "an-intuitive-path-to-digital-synthesis-with-ddsp",
        "pk": 1334,
        "published": true,
        "publish_date": "2022-09-13T12:53:59+02:00"
    },
    {
        "title": "The Manifesto of New-Art II",
        "description": "The second part of the Manifesto.\nFor the beginning, see:\n\nThe Manifesto of New-Art I",
        "content": "<p><img src=\"/media/uploads/user/4c40874c47a65f53df46d349be2e1464.jpg\" alt=\"\" width=\"344\" height=\"197\" /></p>\n<p>&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">And the Music is often been called the Speech of Emotion and Effect. Music can often been a Language to say Ideas of Feelings that couldn&rsquo;t be said in normal Speech. So Music can be a Language a Percepitioner can spoke to Express his Emotion&rsquo;s. But he should be teached in it. And so he are the Person who gives the Art Work her Circumstances. He gives the Music or the Seems and The Words and The Sings here Place in Time and Space. And he create the Semantic Relations a Work would have. The Artist will be in Future the Linguist, and the Percepitioner are the Poet for him self.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Express Emotions by Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Theory to Express Emotions in Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://academic.oup.com/bjaesthetics/article-abstract/25/1/33/116576?redirectedFrom=PDF\">https://academic.oup.com/bjaesthetics/article-abstract/25/1/33/116576?redirectedFrom=PDF</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Psychology of Music as Language for Emotions:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.psychologytoday.com/us/blog/naked-truth/201410/music-is-what-feelings-sound\">https://www.psychologytoday.com/us/blog/naked-truth/201410/music-is-what-feelings-sound</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What dos Music Express:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.researchgate.net/publication/256706119_What_does_music_express_Basic_emotions_and_beyond\">https://www.researchgate.net/publication/256706119_What_does_music_express_Basic_emotions_and_beyond</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Consumer is the Artist:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Is the Viewer Part of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://blogs.getty.edu/iris/question-of-the-week-is-the-viewer-part-of-an-artwork\">https://blogs.getty.edu/iris/question-of-the-week-is-the-viewer-part-of-an-artwork</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Audience involvement:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.moma.org/learn/moma_learning/themes/media-and-performance-art/participation-and-audience-involvement\">https://www.moma.org/learn/moma_learning/themes/media-and-performance-art/participation-and-audience-involvement</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Viewer in the Gallery:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://scholarship.claremont.edu/cgi/viewcontent.cgi?referer=https://suche.web.de/&amp;httpsredir=1&amp;article=1011&amp;context=scripps_theses\">https://scholarship.claremont.edu/cgi/viewcontent.cgi?referer=https://suche.web.de/&amp;httpsredir=1&amp;article=1011&amp;context=scripps_theses</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Musical_Recipe_of_Emotion_GCD_2010_-_Day_3.jpg\">https://commons.wikimedia.org/wiki/File:Musical_Recipe_of_Emotion_GCD_2010_-_Day_3.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">------</p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/1c1223ac3df1ac6e87d5948449a3c442.jpg\" alt=\"\" width=\"344\" height=\"344\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">This is a Analogue to the Web 2.0</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html\">https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Closer View:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.znetlive.com/blog/web-2-0\">https://www.znetlive.com/blog/web-2-0</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A View from Technopedia:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.techopedia.com/definition/4922/web-20\">https://www.techopedia.com/definition/4922/web-20</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">In this Web it gives two important creators. One of which is the Consumer at his self. The other is the Programmer. He creates the Design of the Portals, the other the Content.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Programmer are only important and responsible for the Form of this Web.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">But the first the Consumer is responsible for the Content of this Web. 
He has to say what he want to say and not to say what not.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the role of the Programmer in Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Principle of Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://wellman.uni-trier.de/images/9/96/Web20.pdf\">http://wellman.uni-trier.de/images/9/96/Web20.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Lock at the Anatomy of Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://archive.oreilly.com/oreillyschool/courses/phpsql3/phpsql311.html\">http://archive.oreilly.com/oreillyschool/courses/phpsql3/phpsql311.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Computer Language of the Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/Which-is-the-best-programming-language-for-Web-2-0\">https://www.quora.com/Which-is-the-best-programming-language-for-Web-2-0</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the role of the User in Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why it Values:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://learn.smile.io/blog/why-the-community-values-of-web-2-0-still-matter\">https://learn.smile.io/blog/why-the-community-values-of-web-2-0-still-matter</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Ideas for the Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.last.fm/music/Daniel+H.+Steinberg,+Chris+Adamson/_/The+Community+of+Web+2.0\">https://www.last.fm/music/Daniel+H.+Steinberg,+Chris+Adamson/_/The+Community+of+Web+2.0</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Community manages her self:</p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://ahrc.ukri.org/documents/project-reports-and-reviews/connected-communities/community-web2-0\">https://ahrc.ukri.org/documents/project-reports-and-reviews/connected-communities/community-web2-0</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/da06be6c920488dafc4785fa00c2f2d4.jpg\" alt=\"\" width=\"344\" height=\"194\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">And this Concept has leading to a Web of Freedom. But all Freedom has the option to be abused. This known all the old Judd. But would it be the right way to install any censorship. No the right option is to act with the Community. 
The Community should have tools to handle such abuse by itself.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Freedom of the Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">OpenSource and Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://milesberry.net/2006/11/open-source-and-web-20\">http://milesberry.net/2006/11/open-source-and-web-20</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Web2.0 as a Social Movement:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/26462303_Web_20_as_a_Social_Movement\">https://www.researchgate.net/publication/26462303_Web_20_as_a_Social_Movement</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Webology says:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.webology.org/2007/v4n2/a40.html\">http://www.webology.org/2007/v4n2/a40.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Abuse of the Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Death of Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.theguardian.com/technology/2011/aug/07/web-2-platform-end-naughton\">https://www.theguardian.com/technology/2011/aug/07/web-2-platform-end-naughton</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Syndication on Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/39673454_Web_20_as_Syndication\">https://www.researchgate.net/publication/39673454_Web_20_as_Syndication</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Influence of Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.researchgate.net/publication/228997437_Potential_influence_of_Web_20_usage_and_security_practices_of_online_users_on_information_management\">https://www.researchgate.net/publication/228997437_Potential_influence_of_Web_20_usage_and_security_practices_of_online_users_on_information_management</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/ecb3e09114b8cdd508dbbcd38478b2d0.jpg\" alt=\"\" width=\"344\" height=\"281\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">It&rsquo;s important to give an explanation here.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We are the maintainers of the System. A System which others use to give their ideas the force to be heard. So it is important to think about the ways our System can be abused. But we have neither the power nor the right to impose any censorship. What, then, is the recommended way to handle this? We should guide the Society to handle it. But here the Society is more a set of individuals than the system of this set. The old &rsquo;68ers called it the Establishment; we call it the Industry. And abuse does not mean right-wing radicalism alone. Abuse often means any manipulation of the freedom of thinking. You know the old saying: thoughts are free and nobody can change that. No, this is incorrect; everybody has the power to change it. 
To change it in the worst way you could imagine.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Web2.0, Where Everybody is Heard:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Idea of Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.lifewire.com/what-is-web-2-0-p2-3486624\">https://www.lifewire.com/what-is-web-2-0-p2-3486624</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Closer Look at the Idea of Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.universalclass.com/articles/computers/an-introduction-to-web-2.0.htm\">https://www.universalclass.com/articles/computers/an-introduction-to-web-2.0.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Lots of Replies to the Idea of the Web2.0:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://gigaom.com/2005/09/28/what-is-web-20\">https://gigaom.com/2005/09/28/what-is-web-20</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How can the Web be Censored:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Reality of Web Censorship:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.theguardian.com/technology/2013/apr/23/web-censorship-net-closing-in\">https://www.theguardian.com/technology/2013/apr/23/web-censorship-net-closing-in</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How it Works:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://computer.howstuffworks.com/internet-censorship.htm\">https://computer.howstuffworks.com/internet-censorship.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Yes and No of it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://debatewise.org/debates/2566-internet-censorship\">https://debatewise.org/debates/2566-internet-censorship</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Establishment:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia on the Establishment:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/The_Establishment\">https://en.wikipedia.org/wiki/The_Establishment</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Key Note I:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://jyx.jyu.fi/bitstream/handle/123456789/63625/1/nonelected%20political%20elites%20nk%201062016.pdf\">https://jyx.jyu.fi/bitstream/handle/123456789/63625/1/nonelected%20political%20elites%20nk%201062016.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Key Note II:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://archive.wilsonquarterly.com/sites/default/files/articles/WQ_VOL2_SU_1978_Article_05.pdf\">http://archive.wilsonquarterly.com/sites/default/files/articles/WQ_VOL2_SU_1978_Article_05.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Right-Wing Radicalism on the Web:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How we all became radical:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.macleans.ca/society/technology/how-the-internet-may-be-turning-us-all-into-radicals\">https://www.macleans.ca/society/technology/how-the-internet-may-be-turning-us-all-into-radicals</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Process of Radicalization by the Web:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://journals.sfu.ca/jd/index.php/jd/article/viewFile/8/8\">http://journals.sfu.ca/jd/index.php/jd/article/viewFile/8/8</a></p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">Anti-Radicalism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://cdn.netzpolitik.org/wp-upload/2017/08/MA_KilianVieth_EuropolPolicingtheWeb_finale.pdf\">https://cdn.netzpolitik.org/wp-upload/2017/08/MA_KilianVieth_EuropolPolicingtheWeb_finale.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Joseph_Herbert_Jones_-_1918_Police_Gazette_photograph_(15431215794).jpg\">https://commons.wikimedia.org/wiki/File:Joseph_Herbert_Jones_-_1918_Police_Gazette_photograph_(15431215794).jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/7410fa60dad7b28d67f2e7c272fb79ab.jpg\" alt=\"\" width=\"344\" height=\"456\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So the last aspect of our work is its role as a medicinal intervention: to change the wrong beliefs and orientation of the Society. 
The Society, as the set of people who really build it up.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And when you are asked for a validation of this art, this is the justification and validation of the &ldquo;New-Art&rdquo;.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who is the Community:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Article about it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.brown.edu/research/research-ethics/sites/brown.edu.research.research-ethics/files/uploads/Who%20is%20the%20community%20-%20Phil%20Brown_0.pdf\">https://www.brown.edu/research/research-ethics/sites/brown.edu.research.research-ethics/files/uploads/Who%20is%20the%20community%20-%20Phil%20Brown_0.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Meaning of Community:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://medium.com/together-institute/what-does-community-even-mean-a-definition-attempt-conversation-starter-9b443fc523d0\">https://medium.com/together-institute/what-does-community-even-mean-a-definition-attempt-conversation-starter-9b443fc523d0</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Way from Community to Power:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.artofrelevance.org/read-online\">http://www.artofrelevance.org/read-online</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s our Belief:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Difference between Believes and Beliefs:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://writingexplained.org/believes-or-beliefs-difference\">https://writingexplained.org/believes-or-beliefs-difference</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Beliefs of 
the Methodist Church:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.umc.org/en/what-we-believe/basics-of-our-faith\">https://www.umc.org/en/what-we-believe/basics-of-our-faith</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Beliefs of the Church of England:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.churchofengland.org/our-faith/what-we-believe\">https://www.churchofengland.org/our-faith/what-we-believe</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Peribleptos_Ochrid.JPG\">https://commons.wikimedia.org/wiki/File:Peribleptos_Ochrid.JPG</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/437c6c1b29ae7bfa0a1ac46da1abd72c.jpg\" alt=\"\" width=\"344\" height=\"229\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So we have to show the consumer the possibilities of his acting. We must teach him to find the real truth behind the great show. We also have to show him alternatives to his well-trodden paths. In this way we multiply his possibilities of existence and acting. 
This is the cultural responsibility of &ldquo;New Art&rdquo;.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Responsibility of Philosophers:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Philosophy of Responsibility:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.iep.utm.edu/responsi\">https://www.iep.utm.edu/responsi</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Responsibility:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://philosophynow.org/issues/56/What_is_Responsibility\">https://philosophynow.org/issues/56/What_is_Responsibility</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Fake News?</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://mgpiety.org/2017/01/29/fake-news-and-the-responsibility-of-philosophers\">https://mgpiety.org/2017/01/29/fake-news-and-the-responsibility-of-philosophers</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Responsibility of the Artist:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Page on it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://maritain.nd.edu/jmc/etext/resart1.htm\">https://maritain.nd.edu/jmc/etext/resart1.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">James Baldwin:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://blogs.baruch.cuny.edu/jamesbaldwin/?p=618\">https://blogs.baruch.cuny.edu/jamesbaldwin/?p=618</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Role of Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://blog.artweb.com/art-and-culture/the-role-of-an-artist\">https://blog.artweb.com/art-and-culture/the-role-of-an-artist</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Truth behind the great Show:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Matrix, the Truth:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.vulture.com/2019/02/the-matrix-built-our-reality-denying-world.html\">https://www.vulture.com/2019/02/the-matrix-built-our-reality-denying-world.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Matrix, the Truth II:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.vulture.com/2019/02/what-the-matrix-predicted-about-life-in-2019.html\">https://www.vulture.com/2019/02/what-the-matrix-predicted-about-life-in-2019.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Matrix, the Philosopher&rsquo;s Stone:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://medium.com/the-philosophers-stone/the-matrix-the-value-of-reality-b0fe7066cc6\">https://medium.com/the-philosophers-stone/the-matrix-the-value-of-reality-b0fe7066cc6</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Possibility of Acting:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We are all creative:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.actorstheatreworkshop.com/tes_essay5\">https://www.actorstheatreworkshop.com/tes_essay5</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We are all Actors:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://link.springer.com/content/pdf/10.1007/s11245-018-9624-7.pdf\">https://link.springer.com/content/pdf/10.1007/s11245-018-9624-7.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia on the Psychology of Egoism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://en.wikipedia.org/wiki/Psychological_egoism\">https://en.wikipedia.org/wiki/Psychological_egoism</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Freedom_Press_door.jpg\">https://commons.wikimedia.org/wiki/File:Freedom_Press_door.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/38a129c130eeb8fb1b58126885021297.jpg\" alt=\"\" width=\"344\" height=\"459\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">John Cage:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Who is John Cage:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.pbs.org/wnet/americanmasters/john-cage-about-the-composer/471\">http://www.pbs.org/wnet/americanmasters/john-cage-about-the-composer/471</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Listen to John Cage:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.last.fm/music/John+Cage\">https://www.last.fm/music/John+Cage</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia on Algorithmic Composition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Algorithmic_composition\">https://en.wikipedia.org/wiki/Algorithmic_composition</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And now we come to the last important person who inspires us. 
We come to a person who is really the inventor of Sociological Art.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Sociological Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia explains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Sociological_art\">https://en.wikipedia.org/wiki/Sociological_art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Work of John Cage:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.sterneck.net/john-cage\">http://www.sterneck.net/john-cage</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art dedicated to, or by, John Cage:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.artnet.de/k%C3%BCnstler/john-cage\">http://www.artnet.de/k%C3%BCnstler/john-cage</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:John_Cage_(1988).jpg\">https://commons.wikimedia.org/wiki/File:John_Cage_(1988).jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/04ce7c802ba05448b2d010296ba5b745.jpg\" alt=\"\" width=\"344\" height=\"344\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">John Cage is our very idol. He was inspired by sociological processes, and he developed the sociological foundation of our &ldquo;New-Art&rdquo;. 
He explored the social-semantic dimension of art.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Sociological:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Dictionary Entry of Sociological:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.merriam-webster.com/dictionary/sociological\">https://www.merriam-webster.com/dictionary/sociological</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Department of Sociology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://sociology.unc.edu/undergraduate-program/sociology-major/what-is-sociology\">https://sociology.unc.edu/undergraduate-program/sociology-major/what-is-sociology</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Asanet.org:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.asanet.org/\">https://www.asanet.org</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What are Sociological Processes:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Overview Article on Sociological Processes:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.yourarticlelibrary.com/sociology/social-processes-the-meaning-types-characteristics-of-social-processes/8545\">http://www.yourarticlelibrary.com/sociology/social-processes-the-meaning-types-characteristics-of-social-processes/8545</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Definition of Sociology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thoughtco.com/what-is-sociology-3026639\">https://www.thoughtco.com/what-is-sociology-3026639</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Overview of Sociological Theories:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.thoughtco.com/sociology-research-and-statistics-s2-3026650\">https://www.thoughtco.com/sociology-research-and-statistics-s2-3026650</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What are Sociological Processes in Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Sociological Aspects of Music:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/267927415_The_Sociology_of_Music\">https://www.researchgate.net/publication/267927415_The_Sociology_of_Music</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Ethno-Sociological Study:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.sciencedirect.com/science/article/pii/S0304422X04000257\">https://www.sciencedirect.com/science/article/pii/S0304422X04000257</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A New Sociology of Music by Mediation:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://halshs.archives-ouvertes.fr/halshs-00193130/document\">https://halshs.archives-ouvertes.fr/halshs-00193130/document</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Where is this in the Work of John Cage:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Work of John Cage:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/biography/John-Cage\">https://www.britannica.com/biography/John-Cage</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Jahsonic.com:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://jahsonic.com/SociologyMusic.html\">https://jahsonic.com/SociologyMusic.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">John Cage&rsquo;s Sociology?</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"http://hermitary.com/solitude/cage.html\">http://hermitary.com/solitude/cage.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:P1510419_%E2%80%9EWas_uns_verbindet%E2%80%9C_(16852859335).jpg\">https://commons.wikimedia.org/wiki/File:P1510419_%E2%80%9EWas_uns_verbindet%E2%80%9C_(16852859335).jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/6fd2ce4a40254747fd0a65050d92a441.jpg\" alt=\"\" width=\"344\" height=\"173\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">At this point we should differentiate the aspects of semantic relations. 
These are:</p>\n<ol>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Sociological View &ndash; the question of which relation exists between a perceiver and me.</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Psychological View &ndash; what my own relation to the work is.</p>\n</li>\n<li>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And the pure Semantic View &ndash; which definition of relation a work would have.</p>\n</li>\n</ol>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Psychology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Overview:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.simplypsychology.org/whatispsychology.html\">https://www.simplypsychology.org/whatispsychology.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Second Overview:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.psychology.org.au/for-the-public/about-psychology/What-is-psychology\">https://www.psychology.org.au/for-the-public/about-psychology/What-is-psychology</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Introduction:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.verywellmind.com/psychology-4014660\">https://www.verywellmind.com/psychology-4014660</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Sociology versus Psychology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Introduction:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.saintleo.edu/blog/online-psychology-degree-vs.-sociology-what-s-the-difference-infographic\">https://www.saintleo.edu/blog/online-psychology-degree-vs.-sociology-what-s-the-difference-infographic</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What 
should I study:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ashford.edu/online-degrees/health-care/sociology-vs-psychology-which-bachelors-degree\">https://www.ashford.edu/online-degrees/health-care/sociology-vs-psychology-which-bachelors-degree</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Or a Connection:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/2580444?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/2580444?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Psychology.jpg\">https://commons.wikimedia.org/wiki/File:Psychology.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">--------<img src=\"/media/uploads/user/56f21d2ad865fea66686bfbdfe15a515.jpg\" alt=\"\" width=\"344\" height=\"451\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">The last point needs some explanation:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">In the past, the most semantically defined work was the Bible.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">We will not discuss here the truth of the legacy of God and Jesus Christ. We make the simplification of calling it a religious poem. We can say it was an agreement on the philosophical view of a society. An agreement of a society, created at a point in time and space: first in antiquity, and later in the Medieval period. From the ancient society of the Jews to the circle of the prophet Jesus Christ. But in the Medieval period it held a higher, absolutist authority. It had more authority than the experimental science of physics. 
Just remember Galileo Galilei.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Authority of the Bible:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Bible.org:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://bible.org/seriespage/chapter-one-authority-bible\">https://bible.org/seriespage/chapter-one-authority-bible</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Pat Zukeran:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.leaderu.com/orgs/probe/docs/auth-bib.html\">http://www.leaderu.com/orgs/probe/docs/auth-bib.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Philosophical View of it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/231955322_The_meaning_of_the_authority_of_the_Bible\">https://www.researchgate.net/publication/231955322_The_meaning_of_the_authority_of_the_Bible</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Religion of the Ancient Jews:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia explains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Jews\">https://en.wikipedia.org/wiki/Jews</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Lexicon Entry on the Jews:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/Jew-people\">https://www.britannica.com/topic/Jew-people</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Jewish Virtual Library:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jewishvirtuallibrary.org/\">https://www.jewishvirtuallibrary.org</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Religion of 
the Christians:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia explains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Christianity\">https://en.wikipedia.org/wiki/Christianity</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Lexicon Entry on Christianity:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/Christianity\">https://www.britannica.com/topic/Christianity</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Web Page about Christianity:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.bbc.co.uk/religion/religions/christianity/index.shtml\">http://www.bbc.co.uk/religion/religions/christianity/index.shtml</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Ancient Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Entry Article:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ancient.eu/philosophy\">https://www.ancient.eu/philosophy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Ancient Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.wisegeek.com/what-is-ancient-philosophy.htm\">https://www.wisegeek.com/what-is-ancient-philosophy.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia tells:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Ancient_Greek_philosophy\">https://en.wikipedia.org/wiki/Ancient_Greek_philosophy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Medieval Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Short Article on Medieval Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://philosophynow.org/issues/50/An_Introduction_to_Medieval_Philosophy\">https://philosophynow.org/issues/50/An_Introduction_to_Medieval_Philosophy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Closer Look at Medieval Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://plato.stanford.edu/entries/medieval-philosophy\">https://plato.stanford.edu/entries/medieval-philosophy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Lexicon Entry:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/Western-philosophy/Medieval-philosophy\">https://www.britannica.com/topic/Western-philosophy/Medieval-philosophy</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/c60f2ecd68b47bf95a7ff4b009bbf977.jpg\" alt=\"\" width=\"344\" height=\"334\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So when we ask about the Boundaries of the Mind, we must ask about the Boundaries of Cognition. And we have to explore Art and Music as a medical intervention for an Illness. An Illness of Cognition.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">So at this point I must confess to being ashamed. Ashamed to borrow from a Religious Society. A Religious Society with the worst Reputation one can have. I am speaking about Scientology.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">I will concentrate only on their Theory, whether pretended or serious. 
And not on the Business of this Society.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Intro to Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://courses.lumenlearning.com/wmopen-psychology/chapter/what-is-cognition\">https://courses.lumenlearning.com/wmopen-psychology/chapter/what-is-cognition</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Behavior of Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.cambridgecognition.com/blog/entry/what-is-cognition\">https://www.cambridgecognition.com/blog/entry/what-is-cognition</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/334315921_What_is_cognition\">https://www.researchgate.net/publication/334315921_What_is_cognition</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What Is Scientology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Neutral View of the Theory:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Scientology_beliefs_and_practices\">https://en.wikipedia.org/wiki/Scientology_beliefs_and_practices</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Scientology.org:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.whatisscientology.org/\">http://www.whatisscientology.org</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A PhD&rsquo;s View?</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://web.archive.org/web/20051014003704/http://www.humanrights-germany.org/experts/eng/flinn01.pdf\">https://web.archive.org/web/20051014003704/http://www.humanrights-germany.org/experts/eng/flinn01.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Bad Reputation of Scientology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Is Scientology Dangerous:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.irishtimes.com/opinion/is-scientology-a-dangerous-cult-1.917839\">https://www.irishtimes.com/opinion/is-scientology-a-dangerous-cult-1.917839</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia on the Controversies of Scientology:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Scientology_controversies\">https://en.wikipedia.org/wiki/Scientology_controversies</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Ask The Times:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thetimes.co.uk/article/is-scientology-dangerous-78dqd9m65db\">https://www.thetimes.co.uk/article/is-scientology-dangerous-78dqd9m65db</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:The_new_Church_of_Scientology_Twin_Cities_Ideal_Organization.jpg\">https://commons.wikimedia.org/wiki/File:The_new_Church_of_Scientology_Twin_Cities_Ideal_Organization.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/d84185ded1222261af8d94e322d4d9b6.jpg\" alt=\"\" 
width=\"344\" height=\"514\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So we should talk about the Concept of Engrams. What an Engram is can easily be explained by the Cycle of Cognition. Or the Cycle of Maya, as the Hindus called it.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Process of Cognition is not a One-Way Street.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Every Process of Cognition is placed in the Context of the Cognitions of its Past.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Scientologists call this the Cognition which builds up the Engram. Old Cognitions lead to an Engram in Relation to new ones. In their Theory it is a Syndrome of the Stress of Cognition. An often-used Quotation attributed to Albert Einstein is that we only use 10% of our Intellectual Force. So what about the other 90%? Scientology claims this remaining Force is blocked by an Engram. But we say it is used up by Consciousness. So if you try to break through this Boundary, you must lose your Consciousness. You are acting. But you no longer think about it. You are fully in the Force of Acting. It is like the meditation of a Shaolin monk who tries to concentrate his Force to break a Tree Trunk. It is good to have this power to Act. 
But it does not lead to the Sense of Acting.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Cycle of Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Time of the Cycle of Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3081809/pdf/pone.0014803.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3081809/pdf/pone.0014803.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Effect of the Cycle of Cognition in Adults:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6388745/pdf/pone.0211779.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6388745/pdf/pone.0211779.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Existing Theory of the Cycle of Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.jfsowa.com/pubs/cogcycle.pdf\">http://www.jfsowa.com/pubs/cogcycle.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s an Engram:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Definition of an Engram:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://study.com/academy/lesson/what-is-an-engram-definition-history.html\">https://study.com/academy/lesson/what-is-an-engram-definition-history.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Theory of the Engram:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.annualreviews.org/doi/10.1146/annurev.psych.55.090902.142050\">https://www.annualreviews.org/doi/10.1146/annurev.psych.55.090902.142050</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Dangerous Use of Engrams:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://io9.gizmodo.com/memory-implantation-is-now-officially-real-909746570\">https://io9.gizmodo.com/memory-implantation-is-now-officially-real-909746570</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Cognitions are bound by their past:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Boundaries of Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/228786643_The_Bounds_of_Cognition\">https://www.researchgate.net/publication/228786643_The_Bounds_of_Cognition</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Analogy as the Core of Cognition:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/228786643_The_Bounds_of_Cognition\">https://www.researchgate.net/publication/228786643_The_Bounds_of_Cognition</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Past in the Present:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/2800799?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/2800799?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Einstein: you only use 10% of your mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Did he say it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.um.edu.mt/think/did-albert-einstein-say-we-only-use-10-of-our-brain\">https://www.um.edu.mt/think/did-albert-einstein-say-we-only-use-10-of-our-brain</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Do we do it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.nib.com.au/the-checkup/future-happenings/do-we-only-use-10-of-our-brains\">https://www.nib.com.au/the-checkup/future-happenings/do-we-only-use-10-of-our-brains</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A more analytical Article:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://getsetflyscience.com/interesting-facts/use-100-percent-brain\">https://getsetflyscience.com/interesting-facts/use-100-percent-brain</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">So it is a double-edged Sword.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Vestibular_cortices_and_spatial_cognition.jpg\">https://commons.wikimedia.org/wiki/File:Vestibular_cortices_and_spatial_cognition.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/79701f9c07c36f528f706559a968bf2d.png\" alt=\"\" width=\"344\" height=\"344\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So what should you do to keep track and order of your Thinking? The only solution is to use the reactive Mind, not to fight against it. You must move most of your Intellectual Processes of thinking into an Automated Background. Likewise, you do not think about every Step you take while walking. If you were Conscious of every Step, you would often not walk to Work. But you have the power to make powerful steps. You must decide what belongs to you. So this is a little bit controversial towards Scientology. 
You must not Fight against it; you must make the right use of it.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">But the Cycle of Cognition is not only a Process within the time of life. It is also a Process before life comes into Being for you.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">So the Structure of the Brain is Designed by the Evolution of Life. These Structures are Structures of a Process in the fight of life. A Fight which has now been running for Millennia. And so we start with a Constitutive Set of Engrams.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The unconscious mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the <a href=\"https://www.simplypsychology.org/unconscious\">unconscious</a> mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.simplypsychology.org/unconscious-mind.html\">https://www.simplypsychology.org/unconscious-mind.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Freud, the inventor of this concept:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.verywellmind.com/what-is-the-unconscious-2796004\">https://www.verywellmind.com/what-is-the-unconscious-2796004</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A short Theory of it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2440575/pdf/nihms-49128.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2440575/pdf/nihms-49128.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Automatic Thinking:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Toxic Ideas:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.inc.com/amy-morin/7-thinking-patterns-that-will-that-rob-you-of-mental-strength-and-what-you-can-do-about-them.html\">https://www.inc.com/amy-morin/7-thinking-patterns-that-will-that-rob-you-of-mental-strength-and-what-you-can-do-about-them.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Catastrophic Thinking:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.psychologytoday.com/us/blog/in-the-face-adversity/201103/catastrophic-thinking\">https://www.psychologytoday.com/us/blog/in-the-face-adversity/201103/catastrophic-thinking</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Critical Thinking:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://fee.org/articles/critical-thinking-doesnt-mean-what-most-people-think\">https://fee.org/articles/critical-thinking-doesnt-mean-what-most-people-think</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Automatic Writing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Spiritual Version:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.skepdic.com/autowrite.html\">http://www.skepdic.com/autowrite.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Example of an automatic Drawing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://web.archive.org/web/20071221083112/http://www.usc.edu/schools/annenberg/asc/projects/comm544/library/images/322.html\">https://web.archive.org/web/20071221083112/http://www.usc.edu/schools/annenberg/asc/projects/comm544/library/images/322.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s an automatic Drawing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.biroco.com/automatic.htm\">https://www.biroco.com/automatic.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 
100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Surrealism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Surrealism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/art/Surrealism\">https://www.britannica.com/art/Surrealism</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Surrealism Web-Page:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.surrealismart.org/\">http://www.surrealismart.org</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Wonderful Web-Page about it:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.parkwestgallery.com/what-is-surrealism-art\">https://www.parkwestgallery.com/what-is-surrealism-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Harlequin_in_the_Land_of_Giants_-_Cesare_Catania_contemporary_painter_-_iperealism_and_surrealism_and_art.png\">https://commons.wikimedia.org/wiki/File:Harlequin_in_the_Land_of_Giants_-_Cesare_Catania_contemporary_painter_-_iperealism_and_surrealism_and_art.png</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/3570a42e2613283fca1bb1912a3a0fe3.jpg\" alt=\"\" width=\"344\" height=\"422\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">Pain is the most basic Concept of these Engrams. You cannot live without Pain. Pain is the drive for most People. And Pain is the Constitutive drive of all Animals. 
It is the Source of Activity.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">If we did not have the Pain that drives us to eat something, we would Starve.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">If we did not have the Pain that drives Breathing, we would no longer breathe.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And if we did not have the Pain of being Scared to Die, most people would commit suicide.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Engram of Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Scientology about Engrams:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Engram_(Dianetics)\">https://en.wikipedia.org/wiki/Engram_(Dianetics)</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Criticism of this:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.skepdic.com/dianetic.html\">http://www.skepdic.com/dianetic.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The real search for the Engram:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2895151/pdf/0350221.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2895151/pdf/0350221.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why Pain Is Important:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Body&rsquo;s Reaction to Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.health24.com/Medical/Pain-Management/About-pain/The-importance-of-feeling-pain-20140604\">https://www.health24.com/Medical/Pain-Management/About-pain/The-importance-of-feeling-pain-20140604</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why God has created Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://greatergood.berkeley.edu/article/item/the_importance_of_pain\">https://greatergood.berkeley.edu/article/item/the_importance_of_pain</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Pain Center:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thepaincenter.com/\">https://www.thepaincenter.com</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Pierrot_in_pain,_ca._1854%E2%80%9355,_by_Nadar.jpg\">https://commons.wikimedia.org/wiki/File:Pierrot_in_pain,_ca._1854%E2%80%9355,_by_Nadar.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">--------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/d38375ca28ec4396d494622782b2077f.jpg\" alt=\"\" width=\"344\" height=\"596\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So most modern Philosophers think that this is a System of Pain.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A System which is adequate for Animals. 
But it is not adequate for the dignity of Humans.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">So they want to remove this System.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And they often say that the only true Problem of Philosophy is the Question of Suicide.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What a wonderful Dignity of Humanity.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why has God created Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why does the Lord give us Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.homeofgrace.org/blog/why-does-god-allow-pain-hint-its-one-of-his-greatest-blessings\">https://www.homeofgrace.org/blog/why-does-god-allow-pain-hint-its-one-of-his-greatest-blessings</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How much Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.forbes.com/sites/quora/2018/04/11/why-can-some-people-tolerate-pain-so-much-more-than-others/#2b1294e83aa4\">https://www.forbes.com/sites/quora/2018/04/11/why-can-some-people-tolerate-pain-so-much-more-than-others/#2b1294e83aa4</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why can Pain be Good:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.bbc.com/future/article/20150827-why-does-it-hurt-so-much-to-hit-your-funny-bone\">https://www.bbc.com/future/article/20150827-why-does-it-hurt-so-much-to-hit-your-funny-bone</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Psychological Sense of Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Psychology of Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"http://williams.medicine.wisc.edu/painpsychology.pdf\">http://williams.medicine.wisc.edu/painpsychology.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Psychological Aspects of Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/25000837\">https://www.ncbi.nlm.nih.gov/pubmed/25000837</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Concept of Mental Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.karger.com/Article/PDF/343003\">https://www.karger.com/Article/PDF/343003</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Question of Suicide in Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Suicide as a Question of Philosophy:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://thefloatinglibrary.com/2009/04/20/suicide-the-one-truly-serious-philosophical-problem-camus\">https://thefloatinglibrary.com/2009/04/20/suicide-the-one-truly-serious-philosophical-problem-camus</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia explains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Philosophy_of_suicide\">https://en.wikipedia.org/wiki/Philosophy_of_suicide</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Theory of Suicide:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1029229\">https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1029229</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://commons.wikimedia.org/wiki/File:Suicide_of_Lucretia--c.1575--by_Luca_Cambiaso--Blanton_Museum_of_Art--Austin_TX.jpg\">https://commons.wikimedia.org/wiki/File:Suicide_of_Lucretia--c.1575--by_Luca_Cambiaso--Blanton_Museum_of_Art--Austin_TX.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/ff0577d1c873afa6dd2ca864a87129d0.png\" alt=\"\" width=\"344\" height=\"370\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">But what is the Opposite of this?</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">On the Opposite side is our slave and servant, the Computer. A Computer, or its Program, is not a System of Pain. It is absolutely Painless.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And so it is not concerned about the reliability of its Source of Energy.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">When you unplug it from its Socket, it will only stop Processing. But it has no Pain.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Even when you destroy its Motherboard, it will simply do no more Processing in the future. 
But it does not feel any Pain about this.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The True Servant is a Computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Servant Computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.servant.net.au/#/home\">http://www.servant.net.au/#/home</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Is it Good to have the Computer as a Servant:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://themodernparent.net/technology-is-a-great-servant-but-a-bad-master-help-kids-take-control\">https://themodernparent.net/technology-is-a-great-servant-but-a-bad-master-help-kids-take-control</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Servant Computer in Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.flickr.com/photos/anokarina/26520726735\">https://www.flickr.com/photos/anokarina/26520726735</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Computer has no pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why Computers can&rsquo;t feel Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.jstor.org/stable/20115302?seq=1#metadata_info_tab_contents\">https://www.jstor.org/stable/20115302?seq=1#metadata_info_tab_contents</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wikipedia: what&rsquo;s pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Pain\">https://en.wikipedia.org/wiki/Pain</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Can a Computer feel Pain:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.quora.com/Consciousness-Can-a-running-computer-program-feel-pain\">https://www.quora.com/Consciousness-Can-a-running-computer-program-feel-pain</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Computer has no drives (in the Sense of Freud):</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Computer that used Instincts:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.macleans.ca/society/technology/how-a-computer-used-gut-instinct-to-win-at-no-limit-texas-hold-em\">https://www.macleans.ca/society/technology/how-a-computer-used-gut-instinct-to-win-at-no-limit-texas-hold-em</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Are Computers smarter than Humans:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://time.com/4960778/computers-smarter-than-humans\">https://time.com/4960778/computers-smarter-than-humans</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What are Instincts:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/instinct\">https://www.britannica.com/topic/instinct</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">--------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/b13160fed42a9c6fd2273b9cbde1815e.jpg\" alt=\"\" width=\"344\" height=\"258\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">So a Computer put into its Chamber for Calculations will do Nothing.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">It does Nothing. And only it can do Nothing. The Human cannot do Nothing because of his System of Pain. 
He needs to reduce the Pain he recognizes.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">It is because of this that he always has something to do.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What would a computer do in freedom?</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The App to do Nothing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://freedom.to/\">https://freedom.to</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">About the App to do Nothing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://freedom.to/blog/freedom-101-faqs-answered\">https://freedom.to/blog/freedom-101-faqs-answered</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Software Freedom:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://copyleft.org/guide/comprehensive-gpl-guidech2.html\">https://copyleft.org/guide/comprehensive-gpl-guidech2.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What is Doing Nothing for a Human:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Be a Human who is Doing Nothing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://thoughtcatalog.com/ryan-holiday/2015/08/human-being-not-human-doing\">https://thoughtcatalog.com/ryan-holiday/2015/08/human-being-not-human-doing</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the true Resident Mind / Evil:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.brainfacts.org/Brain-Anatomy-and-Function/Cells-and-Circuits/2019/What-Your-Brain-Does-When-Youre-Doing-Nothing-010919\">https://www.brainfacts.org/Brain-Anatomy-and-Function/Cells-and-Circuits/2019/What-Your-Brain-Does-When-Youre-Doing-Nothing-010919</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Zen 
the Art of Nothing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ashwita.com/zen/meditation\">https://www.ashwita.com/zen/meditation</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Can a Human do Nothing:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Is Zen Worth It:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/Is-zen-worth-it-Why-or-why-not\">https://www.quora.com/Is-zen-worth-it-Why-or-why-not</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A first Dive into Zen:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.madore.org/~david/zen\">http://www.madore.org/~david/zen</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Pure-Land:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/Buddhism/Pure-Land\">https://www.britannica.com/topic/Buddhism/Pure-Land</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">---------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/9c083dd95ff6bc9fa3ab5563aee30b6b.jpg\" alt=\"\" width=\"344\" height=\"229\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">First, once given a Calculation Command, it will begin to process anything.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Anything in relation to this Program; anything which is like the beating of our heart.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">But if you want to build a Computer with Consciousness, you must give it a principal System of Instructions. And this System is the System of Pain. 
Because the best Definition of Pain is the Drive to serve or promote the Circumstances of one&rsquo;s own Being.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">So if you want a living Human, this Human must have a System of Pain.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">It must have a System which leads it to make Decisions.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">It must make Decisions about the fulfillment of intellectual Drive and Meaning or Sense.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Does the Computer always do what the Code commands:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The DOS Command Prompt:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.techopedia.com/definition/26107/dos-command-prompt\">https://www.techopedia.com/definition/26107/dos-command-prompt</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Speak with the Computer in Code:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.nature.com/articles/d41586-018-05588-x\">https://www.nature.com/articles/d41586-018-05588-x</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s a Computer Command:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://en.wikipedia.org/wiki/Command_(computing)\">https://en.wikipedia.org/wiki/Command_(computing)</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Challenge to Program a Computer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s the Challenge to become a Coder:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/What-are-the-challenges-to-become-a-Programmer\">https://www.quora.com/What-are-the-challenges-to-become-a-Programmer</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 
What can the Programmer">
100%;\">What does a Programmer face in a Day:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.quora.com/What-are-the-greatest-challenges-that-programmers-face-on-a-daily-basis\">https://www.quora.com/What-are-the-greatest-challenges-that-programmers-face-on-a-daily-basis</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Challenge of learning the most:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.efrontlearning.com/blog/2017/04/common-training-challenges-solutions.html\">https://www.efrontlearning.com/blog/2017/04/common-training-challenges-solutions.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Glory of Computer Hacking:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Hacking a Scientific Theory:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://projects.fivethirtyeight.com/p-hacking\">https://projects.fivethirtyeight.com/p-hacking</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">How to Hack a Computer Game:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://thestone.zone/hacking/2019/01/01/hacking-quest-for-glory.html\">http://thestone.zone/hacking/2019/01/01/hacking-quest-for-glory.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Glorious Hackers:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.makeuseof.com/tag/5-of-the-worlds-most-famous-hackers-what-happened-to-them\">https://www.makeuseof.com/tag/5-of-the-worlds-most-famous-hackers-what-happened-to-them</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:WikidataCon_2019_-_2019-10-35_-_hacking_room.jpg\">https://commons.wikimedia.org/wiki/File:WikidataCon_2019_-_2019-10-35_-_hacking_room.jpg</a></p>\n<p 
style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/f3086479265745f1c977e9ed90c0b020.jpg\" alt=\"\" width=\"344\" height=\"237\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">But this Concept is a Trap Door to the Brain of Human. As Above discussed about the Freedom of the Web 2.0. This Trap Door can be abused.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The most working trap is the Trap of the slowness of Evolution.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Evolution is a too long march through the instances. Too long for the most Aspects of modern day everyday life. So here a first Example for the Problem of not adequate running Evolution. The Problem is right but it could be abused. It&rsquo;s easily abused in Because only the Evolution can show the right way of development of Live. 
Or for the development of the World.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s biological Evolution:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Overview of Theses on biological Evolution:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.sciencedirect.com/topics/earth-and-planetary-sciences/biological-evolution\">https://www.sciencedirect.com/topics/earth-and-planetary-sciences/biological-evolution</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Introduction to Evolution:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.thoughtco.com/biological-evolution-373416\">https://www.thoughtco.com/biological-evolution-373416</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A deeper Dive into Evolution:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://bioinformatica.uab.es/divulgacio/biological_evolution.html\">http://bioinformatica.uab.es/divulgacio/biological_evolution.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s technical Evolution:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Prophecy about technical Evolution for tomorrow:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.digitalistmag.com/cio-knowledge/2019/02/26/evolution-of-technology-continues-what-is-next-in-2019-06196611\">https://www.digitalistmag.com/cio-knowledge/2019/02/26/evolution-of-technology-continues-what-is-next-in-2019-06196611</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Technical Evolution as Business:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.strategy-business.com/article/00014?gko=6fa6d\">https://www.strategy-business.com/article/00014?gko=6fa6d</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 
Ideas for the Technique">
100%;\">Ideas for the Technique of Tomorrow:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://bigthink.com/hybrid-reality/the-evolution-of-technology\">https://bigthink.com/hybrid-reality/the-evolution-of-technology</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Evolution of Mind is becoming more technical:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">It needs more than Selection by Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.forbes.com/sites/johnfarrell/2018/05/06/how-the-evolution-of-the-mind-needed-more-than-natural-selection/#6dc5a46918ca\">https://www.forbes.com/sites/johnfarrell/2018/05/06/how-the-evolution-of-the-mind-needed-more-than-natural-selection/#6dc5a46918ca</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Problem of Evolution of Mind:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.frontiersin.org/articles/10.3389/fpsyg.2018.01537/full\">https://www.frontiersin.org/articles/10.3389/fpsyg.2018.01537/full</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Evolution of Consciousness:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3385676/pdf/rstb20120111.pdf\">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3385676/pdf/rstb20120111.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Mother Nature does Evolution Best:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Evolution of the Human:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.independent.co.uk/news/long_reads/science-and-technology/human-evolution-beyond-nature-a9322296.html\">https://www.independent.co.uk/news/long_reads/science-and-technology/human-evolution-beyond-nature-a9322296.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Science learning by Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.nature.com/scitable/knowledge/evolution-13228138\">https://www.nature.com/scitable/knowledge/evolution-13228138</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Bionic:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.collinsdictionary.com/dictionary/english/bionic\">https://www.collinsdictionary.com/dictionary/english/bionic</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Bras_bionic_Danny_Letain_%27be_bionic%27.jpg\">https://commons.wikimedia.org/wiki/File:Bras_bionic_Danny_Letain_%27be_bionic%27.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/8352be8ec5f378dabdbfc5a70a44cf70.jpg\" alt=\"\" width=\"344\" height=\"241\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">This simplest Example is the reaction of Fight and Escape in a Situation of Stress.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And we all know this wrong Reaction of our Mind. It isn&rsquo;t in our days the right reaction to choose Fight or Flay. 
Especially in the Situation of an Examination, like for the High School Degree.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And this is sadly the Truth about many of these Structures of the System of Pain. But expecting to cleanse our Acting is not very advisable, because in most cases it is still the correct Reaction.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wrong Behavior by natural Evolution:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Instinct of Morality:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://kids.frontiersin.org/article/10.3389/frym.2016.00003\">https://kids.frontiersin.org/article/10.3389/frym.2016.00003</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Wrong Understanding of Evolution:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://evolution.berkeley.edu/evolibrary/misconceptions_faq.php\">https://evolution.berkeley.edu/evolibrary/misconceptions_faq.php</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Behavior of Stress:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.researchgate.net/publication/303791745_Evolutionary_Origins_and_Functions_of_the_Stress_Response_System\">https://www.researchgate.net/publication/303791745_Evolutionary_Origins_and_Functions_of_the_Stress_Response_System</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Technique to compensate for Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Morality to change Nature:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.nature.com/articles/d41586-019-01906-z\">https://www.nature.com/articles/d41586-019-01906-z</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Example of Technique:</p>\n<p style=\"margin-bottom: 0cm; 
line-height: 100%;\"><a href=\"https://www.nature.com/articles/s41598-019-39415-8.pdf\">https://www.nature.com/articles/s41598-019-39415-8.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Second Example of Technique:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.nature.com/articles/s41586-019-1711-4\">https://www.nature.com/articles/s41586-019-1711-4</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Techniques for Handling Stress:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Techniques for reducing Stress:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.health.harvard.edu/mind-and-mood/six-relaxation-techniques-to-reduce-stress\">https://www.health.harvard.edu/mind-and-mood/six-relaxation-techniques-to-reduce-stress</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Relaxing Techniques:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.helpguide.org/articles/stress/relaxation-techniques-for-stress-relief.htm\">https://www.helpguide.org/articles/stress/relaxation-techniques-for-stress-relief.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Stress Management:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.webmd.com/balance/stress-management/stress-management\">https://www.webmd.com/balance/stress-management/stress-management</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">----------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/ddbaf45d2fc9bfe33b4f1358268e5af4.jpg\" alt=\"\" width=\"344\" height=\"506\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 
&nbsp;</p>">
100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">And another Problem is the workings of Manipulation.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The System, the so-called Industry, has the tendency to manipulate our Thinking. It attacks the Cycle of Cognition.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Modern people are exposed to many of these Manipulations.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">It begins with Propaganda and goes from strong Suggestions down to lesser political and sociological Suggestions.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">One of these medium-powered Suggestions is Advertisement.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Propaganda:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Definition of the Term Propaganda:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.merriam-webster.com/dictionary/propaganda\">https://www.merriam-webster.com/dictionary/propaganda</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A better Definition of Propaganda:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"http://www.businessdictionary.com/definition/propaganda.html\">http://www.businessdictionary.com/definition/propaganda.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Lexical Entry of Propaganda:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/propaganda\">https://www.britannica.com/topic/propaganda</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What are manipulating political Systems:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Manipulation Techniques:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://businessmotivationfamily.com/7-political-manipulation-techniques-exposed\">https://businessmotivationfamily.com/7-political-manipulation-techniques-exposed</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Political Manipulations with Examples:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://thepowermoves.com/the-psychology-of-political-persuasion\">https://thepowermoves.com/the-psychology-of-political-persuasion</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Election Campaign:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://bookonlime.ru/lecture/4-political-manipulation-election-campaign\">https://bookonlime.ru/lecture/4-political-manipulation-election-campaign</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Consumer Advertising:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Advertising:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.wisegeek.com/what-is-consumer-advertising.htm\">https://www.wisegeek.com/what-is-consumer-advertising.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Theory of Consumerism:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://2012books.lardbucket.org/books/business-ethics/s16-03-we-buy-therefore-we-are-consum.html\">https://2012books.lardbucket.org/books/business-ethics/s16-03-we-buy-therefore-we-are-consum.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Mobile Advertising:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.mmaglobal.com/mena/articles/mobile-advertising-whats-it-consumer\">https://www.mmaglobal.com/mena/articles/mobile-advertising-whats-it-consumer</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://commons.wikimedia.org/wiki/File:OkeH_Records_Advertising_-_Mamie_Smith._January_1921.jpg\">https://commons.wikimedia.org/wiki/File:OkeH_Records_Advertising_-_Mamie_Smith._January_1921.jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">-------</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><img src=\"/media/uploads/user/fbea443e46581281e887b4dc4ed2815c.jpg\" alt=\"\" width=\"344\" height=\"428\" /></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%; page-break-before: always;\">For the first mode of Suggestion the System of NS ( National-Socialism ) can be a Example. A Example which has important lead to the Holocaust. And at this you should see the Dangerous.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">And for the lower case of Suggestion is the syndrome of inferiority a Example. The Syndrome of unemployment people which has not anymore any Dignity. 
And this in a System which purposely leads to ever less Employment for ever more Humans.</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">NS Propaganda:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Holocaust Encyclopedia explains:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://encyclopedia.ushmm.org/content/en/article/nazi-propaganda\">https://encyclopedia.ushmm.org/content/en/article/nazi-propaganda</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Material of Nazi Propaganda:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://research.calvin.edu/german-propaganda-archive\">https://research.calvin.edu/german-propaganda-archive</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Example by Joseph Goebbels:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://research.calvin.edu/german-propaganda-archive/goeb73.htm\">https://research.calvin.edu/german-propaganda-archive/goeb73.htm</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Art in NS Propaganda:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Lexicon Entry on Degenerate Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/art/degenerate-art\">https://www.britannica.com/art/degenerate-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">What&rsquo;s Degenerate Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.tate.org.uk/art/art-terms/d/degenerate-art\">https://www.tate.org.uk/art/art-terms/d/degenerate-art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Definition of Degenerate Art:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.definitions.net/definition/degenerate%20art\">https://www.definitions.net/definition/degenerate%20art</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Social Propaganda:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Social Control of Propaganda:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.britannica.com/topic/propaganda/Social-control-of-propaganda\">https://www.britannica.com/topic/propaganda/Social-control-of-propaganda</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">RTL:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.rtl-west.de/beitrag/artikel/propaganda-im-netz\">https://www.rtl-west.de/beitrag/artikel/propaganda-im-netz</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Spiegel about Springer:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.spiegel.de/spiegel/print/d-45226315.html\">https://www.spiegel.de/spiegel/print/d-45226315.html</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">The Problem of Inferiority:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">An Introduction to the Problem of Unemployment:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://content.wisestep.com/unemployment-causes-effects-solutions\">https://content.wisestep.com/unemployment-causes-effects-solutions</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">Why is it a Problem:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://www.reference.com/world-view/unemployment-problem-8b16a870a6db2796\">https://www.reference.com/world-view/unemployment-problem-8b16a870a6db2796</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">A Theory of Unemployment:</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a 
href=\"https://www.nber.org/chapters/c1180.pdf\">https://www.nber.org/chapters/c1180.pdf</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\"><a href=\"https://commons.wikimedia.org/wiki/File:Anti-socialist_propaganda_WWI_(cropped).jpg\">https://commons.wikimedia.org/wiki/File:Anti-socialist_propaganda_WWI_(cropped).jpg</a></p>\n<p style=\"margin-bottom: 0cm; line-height: 100%;\">&nbsp;</p>",
        "topics": [
            {
                "id": 96,
                "name": "Contemporary",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 390,
                "name": "Manifesto",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 389,
                "name": "New-art",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17628,
            "forum_user": {
                "id": 17624,
                "user": 17628,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6389f37aeaee190f92e385b6a9b395f6?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "creco",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-manifesto-of-new-art-ii",
        "pk": 643,
        "published": false,
        "publish_date": "2020-04-25T16:57:34.805117+02:00"
    },
    {
        "title": "Mixed Music Interpretation Workshop - ManiFeste-2023 academy",
        "description": "This workshop is intended for composers, sound engineers, or computer music designers who wish to acquire the experience of a professional situation in the musical, technical, and logistical preparation of rehearsals and a mixed music concert. It will allow students to acquire the necessary techniques to master and ensure the performance of the electronic parts of a mixed music work.",
        "content": "<p style=\"text-align: left;\"><strong>June 19-July 1, 2023, Paris, France</strong></p>\r\n<p style=\"text-align: left;\"><em>ManiFeste, the IRCAM multidisciplinary festival and academy, is a gathering of creative artists in Paris, combining music with other disciplines: theater, dance, digital arts, and visual arts. The ManiFeste academy allows for discoveries, discussions, and exchanges among the active participants and listeners, guest artists and composers, and partners. More details on: <a href=\"https://www.ircam.fr/manifeste/academie/\">www.ircam.fr</a></em><strong><br /><br /></strong></p>\r\n<p style=\"text-align: left;\"><strong>Educational Advisors: Simone Conforti, Johannes Regnier,&nbsp;computer music designers and professors at IRCAM</strong><br /><strong>Workshop taught in English,</strong> in association with the student musicians from the P&ocirc;le Sup&rsquo; 93 and students from the ENS Louis-Lumi&egrave;re</p>\r\n<p style=\"text-align: left;\"><img alt=\"Simone Conforti\" src=\"/media/uploads/user/6b3e8fe229bd6e79eec0162bd70bc7c5.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" /><br /><strong>This workshop is intended for composers, sound engineers, or computer music designers who wish to acquire the experience of a professional situation in the musical, technical, and logistical preparation of rehearsals and a mixed music concert.</strong> It will allow students to acquire the necessary techniques to master and ensure the performance of the electronic parts of a mixed music work. <br /><br />The two weeks, supervised by IRCAM instructors, will consist of group classes and hands-on work in the studio. The workshop will be based on specific themes in order to strengthen production methods for the performance of mixed music. They will also include an improvisation session (instrument and electronics). 
<br /><br />Computer-music design students will work in close collaboration with young performers from the P&ocirc;le Sup'93, and will present the program of works studied in concert. <br /><br />The two weeks will be structured around the following steps: <br />&bull; courses in the computer room and practical workshops in the studio (analysis of the musical writing of the pieces in relation to the electronic writing, setting up the material conditions of the concert, debugging, analysis of patches, methodology...) <br />&bull; Improvisation and electronics session allowing an artistic encounter in view of a close collaboration based on a double performance: the instrumental interpretation itself and that of the electronics <br />&bull; realization of a simulation for each piece of the program studied during a recording session with the performer in the Ircam studio <br />&bull; rehearsals (experimentation with the diffusion of the pieces, monitoring and observation of the performer's playing) <br />&bull; mixed music concert by the workshop's students who perform the electroacoustic parts of one of the works in the program, with the participation of the performers from P&ocirc;le Sup'93 <br /><br />Each work on the program will be assigned to a student: <br />&bull; <strong>Noriko Baba, <em>o&uuml; </em></strong>(2004),&nbsp;for clarinet and electronics, 10 min <br />&bull;<strong> Florent Caron-Darras, <em>Technotope</em></strong> (2019), for baritone saxophone and electronics, 12 min <br />&bull;<strong> Kevin Juillerat, <em>Pas de deux </em></strong>(2016),&nbsp;for guitar and electronics, 7 min <br />&bull; <strong>Malika Kishino, <em>&Eacute;closion</em> </strong>(2005),&nbsp;for harp and electronics, 12 min <br /><br />In addition, a piece from the IRCAM repertoire could serve as a common core of study for all participants.</p>\r\n<p style=\"text-align: left;\"><strong>APPLICATIONS</strong></p>\r\n<hr />\r\n<p style=\"text-align: left;\"><strong>No age 
limit. </strong><br /><strong>Applicants must be able to speak and understand English.</strong></p>\r\n<p style=\"text-align: left;\">Details and application online <a href=\"https://ulysses-network.eu/competitions/manifeste-2023-mixed/\">on ULYSSES Platform </a><br /><strong>Deadline for applications</strong>&nbsp;<strong>Tuesday, February 8, 2023, 4pm CEST&nbsp;</strong></p>",
        "topics": [
            {
                "id": 1098,
                "name": "academy",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 824,
                "name": "France",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1100,
                "name": "June 2023",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1097,
                "name": "mixed music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1099,
                "name": "Paris",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1096,
                "name": "workshop",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17721,
            "forum_user": {
                "id": 17716,
                "user": 17721,
                "first_name": "Natacha",
                "last_name": "Moenne-Loccoz",
                "avatar": "https://forum.ircam.fr/media/avatars/1517-IRCAM-MANIF19--VISUEL-0-TheHouse1-Web.jpg",
                "avatar_url": "/media/cache/83/72/8372e1d360cd768ede652baeed45a1fb.jpg",
                "biography": null,
                "date_modified": "2024-12-12T15:36:41.115903+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 206,
                        "forum_user": 17716,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "moennelo",
            "first_name": "Natacha",
            "last_name": "Moenne-Loccoz",
            "bookmarks": []
        },
        "slug": "mixed-music-interpretation-workshop",
        "pk": 2024,
        "published": true,
        "publish_date": "2023-01-23T12:44:10+01:00"
    },
    {
        "title": "Celestial Armillary and Ubiquitous Wave",
        "description": "Celestial Armillary and Ubiquitous Wave is a multimedia experience combining Spatial Audio and Virtual Reality experience that explores the cognition of sound and cosmology.",
        "content": "<p><strong><em>Celestial Armillary and Ubiquitous Wave </em></strong><span style=\"\">is a multimedia two-perspective (on-site/ virtual) experience of the same theme that explores the cognition of sound and the cosmos in a multisensory context.</span></p>\n<p>&nbsp;</p>\n<p><span style=\"\">This project is inspired by modern and ancient Chinese observational cosmology,&nbsp; we hope to translate the model through sound and visual language to create a new version&mdash; a passage that can link the past and now. In the 4th century B.C., Chinese ancients began to use the armillary sphere to measure and interpret celestial objects. It was used to construct perceptions of the external world. In this age of modern technology, astronomical data measurement and sonification are also iterating to explore the human-universe relationship. A new awareness of the universe is provoked by utilizing Higher-Order Ambisonics(HOA) sound experience and Virtual Reality experience. These two experiences perform in parallel and create a mirror heterotopia.</span></p>\n<p>&nbsp;</p>\n<p><span style=\"\"><img alt=\"\" src=\"/media/uploads/user/4e34f3646a5dec0e54c1583520410748.jpg\"></span></p>\n<p>&nbsp;</p>\n<p><span style=\"\">The first spatial sound experience transports the audience to the center of a giant armillary in space. At the same time, the moving image creates a &ldquo;remeasurement&rdquo; of the Asian Astronomical map and responds to the experimental music. The rotation of the giant armillary sphere accompanies with various Chinese instruments, such as the Zither, Flute, Xun, Drums, etc. 
Starting from the Sun, it takes the audience on a slow astronomical sound-wave journey through the nine planets.</span></p>\n<p><span style=\"\"><img alt=\"\" src=\"/media/uploads/user/63bd20f53360b3fd8b3585bd82e0f385.png\"></span></p>\n<p><span style=\"\">In our second experience, in virtual reality, the ambisonic sound and interactivity immerse the audience in a world of space measurement. With 6DoF (six degrees of freedom) tracking, each step the audience takes changes the armillary sphere and its distance from the planets. The audience is encouraged to use their bodies in the virtual space to measure the space-time transformation of the universe. Through the perception of spatial changes in the armillary sphere and the planets, this experience amplifies the sense of embodiment.</span></p>\n<p>&nbsp;</p>\n<p><span style=\"\">Acting as a key, the multi-sensory experience opens the threshold to a cosmic archaeological experience of the universe for the audience.</span></p>",
        "topics": [
            {
                "id": 620,
                "name": "Spatialaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 301,
                "name": "Virtual reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 27090,
            "forum_user": {
                "id": 27063,
                "user": 27090,
                "first_name": "Cainy",
                "last_name": "Yiru Yan",
                "avatar": "https://forum.ircam.fr/media/avatars/IMG_0761_%E5%89%AF%E6%9C%AC.JPG",
                "avatar_url": "/media/cache/63/8f/638f3b80b67e2eeb3a0e7ab0f789aaa6.jpg",
                "biography": "Cainy Yiru Yan is a London-based interdisciplinary artist and immersive experience designer. Her practice spans extended reality (XR), audiovisual installations, sculptural practices, spatial sound, photography, film, documentary, digital art, live performances, and art prints. She explores overlooked narratives through post-existentialist thought, holistic systems, and Daoist philosophy, creating environments that dissolve the boundaries between materiality, spirituality, temporality, and human experience. Grounded in these philosophical foundations, her work investigates the fluid and interdependent relationships between space, material, memory, and human perception. Rather than imposing narratives, she invites audiences to encounter environments where decay and renewal, stillness and transformation, coexist. Through immersive technologies, spatial atmospheres, and multi-sensory experiences, Cainy crafts poetic spaces that invite audiences to engage with the invisible layers of memory, nature, and transformation. Her work has been exhibited internationally at venues such as IRCAM at the Centre Pompidou (FR), Kühlhaus Berlin (DE), the Royal Birmingham Society of Artists (UK), Flor",
                "date_modified": "2025-05-03T18:20:29.531747+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "yanyiru",
            "first_name": "Cainy",
            "last_name": "Yiru Yan",
            "bookmarks": []
        },
        "slug": "celestial-armillary-and-ubiquitous-wave-1",
        "pk": 2040,
        "published": false,
        "publish_date": "2023-02-06T19:15:49.486131+01:00"
    },
    {
        "title": "Fragments de l’extinction",
        "description": "Résidence en recherche artistique 2017.18.\r\nDavid Monacchi.\r\nEn collaboration avec l’équipe Espaces acoustiques et cognitifs de l’Ircam-STMS et le ZKM.",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\"></h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>R&eacute;sidence en recherche artistique 2017.18</h3>\r\n<p><strong>Fragments de l&rsquo;extinction : espaces ambisoniques d&rsquo;exploration et de composition pour la pr&eacute;servation d&rsquo;&eacute;cosyst&egrave;mes</strong><br />En collaboration avec l&rsquo;&eacute;quipe<span>&nbsp;</span><a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac/\">Espaces acoustiques et cognitifs</a><span>&nbsp;</span>de l&rsquo;Ircam-STMS et le ZKM.</p>\r\n<p>Utilisant les technologies d&rsquo;enregistrement tridimensionnel avanc&eacute;es, exp&eacute;riment&eacute;es dans des contr&eacute;es perdues des for&ecirc;ts &eacute;quatoriales, ce projet &agrave; long terme, &laquo; Fragments de l&rsquo;extinction &raquo;, veut rassembler, &eacute;tudier et diffuser les paysages sonores d&rsquo;une biodiversit&eacute; encore intacte dans le but de la pr&eacute;server. Cette proposition de r&eacute;sidence de recherche artistique se concentrera sur les donn&eacute;es collect&eacute;es lors des enregistrements de terrain, dans les coins vierges de l&rsquo;Amazonie et de Born&eacute;o. Elle explorera la complexit&eacute; sonore de ces &eacute;cosyst&egrave;mes. Le but est de d&eacute;montrer comment, &agrave; partir de l&rsquo;enregistrement d&rsquo;espace encore pr&eacute;serv&eacute;, on peut extraire des donn&eacute;es utiles pour des explorations bioacoustiques et &eacute;cologiques, et parall&egrave;lement, construire un alphabet d&rsquo;un nouveau langage &laquo; musical &raquo; fond&eacute; sur l&rsquo;analyse et la recomposition d&rsquo;un v&eacute;ritable &eacute;cosyst&egrave;me. 
Plus pr&eacute;cis&eacute;ment, des entit&eacute;s individuelles seront isol&eacute;es au sein d&rsquo;un habitat acoustiquement vierge par leurs observations dans les domaines temporel, fr&eacute;quentiel et spatial. Dans un second temps, la complexit&eacute; de ces authentiques &eacute;cosyst&egrave;mes sera restitu&eacute;e &agrave; partir de ses caract&eacute;ristiques sph&eacute;riques dans le contexte d&rsquo;une restitution ambisonique d&rsquo;ordre &eacute;lev&eacute;.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\">David Monacchi</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202018/.thumbnails/david_monacchi.jpg/david_monacchi-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biographie</h3>\r\n<p>David Monacchi (Italie, 1970) est chercheur, capteur de paysages sonores et compositeur &eacute;co-acoustique. Son projet multidisciplinaire, &laquo; Fragments de l&rsquo;extinction &raquo;, entrepris depuis quinze ans, repose sur une recherche de terrain, dans les derni&egrave;res r&eacute;gions de la for&ecirc;t pluviale primaire &eacute;quatoriale. Ayant re&ccedil;u de multiples r&eacute;compenses internationales, David Monacchi &eacute;tudie une nouvelle approche compositionnelle construite autour d&rsquo;enregistrements 3D de paysages sonores au sein d&rsquo;&eacute;cosyst&egrave;mes vierges, ayant pour but de focaliser le discours sur la crise de la biodiversit&eacute; par le m&eacute;dium musical et gr&acirc;ce &agrave; des installations sonores. 
B&eacute;n&eacute;ficiaire d&rsquo;une bourse Fulbright, il a enseign&eacute; &agrave; l&rsquo;universit&eacute; de Berkeley et, depuis 2000, &agrave; l&rsquo;universit&eacute; de Macerata. Il est actuellement professeur d&rsquo;&eacute;lectroacoustique au conservatoire de Pesaro. Il a travaill&eacute; vingt-cinq ans dans des domaines interdisciplinaires, tout particuli&egrave;rement en Europe et en Am&eacute;rique du Nord. Il a r&eacute;alis&eacute; des &oelig;uvres de musique contemporaine, des installations, cin&eacute;ma, vid&eacute;o-art, art de la situation. Il d&eacute;tient un brevet international et il est le fondateur de plusieurs r&eacute;seaux artistiques et scientifiques.</p>\r\n</div>\r\n</div>\r\n<h2 class=\"dotted\">Liens</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"http://www.fragmentsofextinction.org/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>http://www.fragmentsofextinction.org/</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 32,
                "name": "Recherche Artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 30,
                "name": "Recherche Musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 44,
                "name": "Résidence",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "fragments-de-lextinction",
        "pk": 29,
        "published": true,
        "publish_date": "2019-03-21T17:20:31+01:00"
    },
    {
        "title": "Rare-Earth - Matt DIXON",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris",
        "content": "<p><span>With our demands for electronic and computer-driven technology intensifying, rare-earth mining, and the invasive means of extracting the elements required to produce the devices which enable almost every facet of our lives, is the cause of huge environmental impact, including severe ground and air pollution, and extensive and indelible scarring of the earth&rsquo;s landscape. As consumers of the end products, we are all complicit. </span><br /><br /><span>Via absurdist commentary, generated by a recurrent neural network processing an extensive dataset of online consumer (electronic) product reviews, and a generative sound design composed in real-time, &lsquo;Rare-Earth&rsquo; seeks to question our drive to consume these products, and reminds us that rather than seeing technology as an extension of ourselves we might see technology as made of the raw materials of the earth. &nbsp;</span></p>",
        "topics": [
            {
                "id": 1178,
                "name": "consumerism",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 920,
                "name": "landscape",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1176,
                "name": "Mining",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 149,
                "name": "Technology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32545,
            "forum_user": {
                "id": 32497,
                "user": 32545,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/md.jpg",
                "avatar_url": "/media/cache/fe/1a/fe1a1c6c90f04810ec7750133f9bd9c2.jpg",
                "biography": "Formerly a Creative Director for the arts and culture sector, Matt has also lectured extensively in the UK and is a practising artist, with work held in private collections in the UK and U.S. He's currently a Digital Direction student at the Royal College of Art, London. \n\nMatt's current research explores sound, AI and ML, in the context of language and poetry, and considers how the absurd can heighten questions of meaning in a universe increasingly separate from our conscious experience.",
                "date_modified": "2023-09-11T11:58:32.259638+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "mattdixon",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 107,
                    "user": 32545,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "rare-earth",
        "pk": 2076,
        "published": true,
        "publish_date": "2023-02-21T12:41:51+01:00"
    },
    {
        "title": "FLUX ",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris",
        "content": "<p>FLUX is an immersive spatial audio composition designed for IRCAM&rsquo;s 6 channel speaker setup. The work explores the relationship between rivers, cities and people, illustrating commonalities and differences of the perception of rivers across the world. Utilising recordings of a range of different people speaking about their personal experiences with rivers, FLUX brings attention to the significance of rivers in our memories, daily lives, and communities. &nbsp;</p>\n<p>The use of spatial audio allows the audience to experience a sense of geographical distance in a physical environment and illustrates the interconnectedness of bodies of water.&nbsp;</p>",
        "topics": [
            {
                "id": 1211,
                "name": "narrative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 900,
                "name": "spatialaudio ",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32914,
            "forum_user": {
                "id": 32866,
                "user": 32914,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/252dc56234a90fefac37a898cb2452f4?s=120&d=retro",
                "biography": "Kristina Kapilin is a Faroese/Danish sound artist and designer based in London. Her multidisciplinary practice investigates the way in which stories, traditions and myths are embedded in our experience of ecologies, places, environments and histories. Often drawing upon field recordings in her compositions, Kristina’s work explores layers of perception and reality, that reference magical realism, surrealism and psychology.\n\nKristina Kapilin holds a BA in English and Digital Design from Aarhus University (2016) and a BA in Performance: Design and Practice from Central Saint Martins (2020) where she graduated with First Class Honours. She is currently studying for an MA in Digital Direction at the Royal College of Art.",
                "date_modified": "2023-03-22T18:18:36+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "kristinakapilin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "flux",
        "pk": 2160,
        "published": false,
        "publish_date": "2023-03-25T15:34:21.468039+01:00"
    },
    {
        "title": "Moving Towards Synchrony",
        "description": "Moving Towards Synchrony is an immersive work whose purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that has been generated -and defined by- those same physiological events.",
        "content": "<p>This is a link to the video presentaton:&nbsp;<br /><a href=\"https://vimeo.com/514333273\">https://vimeo.com/514333273&nbsp;</a></p>\r\n<p><strong>Introduction:</strong></p>\r\n<p>My name is Johnny Tomasiello and I am a multidisciplinary artist and composer, living and working in New York.<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>My piece, titled <em>Moving Towards Synchrony, version 3, </em>is an immersive work whose purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that has been generated -and defined by- those same physiological events.</p>\r\n<p>It investigates the neurological effects of modulating brain waves and their corresponding physiological effects by use of a Brain-Computer Music Interface, which allows for the sonification of the data captured by an electroencephalogram.</p>\r\n<p>The work presents an interactive computer-assisted compositional performance system that can teach participants how to influence a positive change in their own physiology by learning to influence the functions of the autonomic nervous system through neuro- and bidirectional feedback.<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>The methodology involves collecting physiological data through non invasive neuroimaging. A subject&rsquo;s brainwaves are used to generate realtime interactive music compositions which are simultaneously experienced by that subject. The melodic and rhythmic content, are derived from, and constantly influenced by, the subject&rsquo;s EEG readings. A subject, focusing on the generative stimuli, will attempt to elicit a change in their physiological systems through their experience of the bidirectional feedback. 
The resulting physiological responses will be recorded and measured to determine the efficacy of using external stimuli to affect the human body both physiologically and psychologically.<br /><br />EEG brainwave data has shown high levels of success in classifying mental states [1], which affect &ldquo;autonomic modulation of the cardiovascular system&rdquo; [2], and there are existing studies investigating how music can influence a response in the autonomic nervous system. [3] It is with these phenomena in mind that this work was created.<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>Increased activity in the alpha wave frequency range is &ldquo;usually associated with alert relaxation&rdquo;. [4] Methods intended to increase activity in the alpha wave frequency range through feedback, autogenic meditation, breathing exercises, and other techniques are called alpha training.</p>\r\n<p>Positive changes in alpha are what I am primarily concerned with here, since research has shown that stimulating activity within alpha causes muscle relaxation, pain reduction, breathing rate regulation, and decreased heart rate. [4] [5] [6] It has also been used for reducing stress, anxiety, and depression, and can encourage improvements in memory and mental performance, and aid in the treatment of brain injuries.</p>\r\n<p>In addition to investigating these neuroscience concerns, this work is designed to explore the validity of using the scientific method as an artistic process. The methodology will be to create an evidence-based system for the purpose of developing research-based projects. This will limit, initially, subjective interpretation of the work and will encourage a mindful and intentional interaction with the experience itself. What is learned will determine the value of the work.</p>\r\n<p>As Gita Sarabhai expressed to John Cage, &ldquo;...music conditions one's mind, leading to &lsquo;moments in [one's] life that are complete and fulfilled&rsquo;&rdquo; [5]. 
Music, in this case, can also be used by the mind to condition one's body.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Information on EEG:</strong></p>\r\n<p>An electroencephalogram (also known as an EEG) is an electrophysiological monitoring method used to record the electrical activity of the brain. A typical adult human EEG signal is between 10 and 100 &micro;V (microvolts) in amplitude when measured from the scalp. It was invented by German psychiatrist Hans Berger in 1929, and research into how brainwaves can be interpreted and modulated started shortly thereafter.<span class=\"Apple-converted-space\">&nbsp; </span>Using an EEG, you are able to directly measure neural activity and capture cognitive processes in real time. Berger proved that alpha waves (also known as Berger waves) were generated by cerebral cortical neurons.</p>\r\n<p>In 1934, English physiologists Edgar Adrian and Bryan Matthews first described the sonification of alpha waves derived from EEG data. [8] They found that &ldquo;non-visual activities which demand the entire attention (e.g. mental arithmetic) abolish the waves; sensory stimulation which demand attention also do so&rdquo; [9], showing how concentration and thought processes affected activity in the alpha wave frequency range.</p>\r\n<p>The brain wave activity recorded in an EEG is a summation of the inhibitory and excitatory postsynaptic potentials that occur across a neuronal membrane. 
[10]</p>\r\n<p>The measurements are taken by way of electrodes placed on the scalp.<span class=\"Apple-converted-space\">&nbsp; </span>The readings are&nbsp;divided into five frequency bands, delineating slow, moderate, and fast waves.<span class=\"Apple-converted-space\">&nbsp; </span>The bands, from slowest to fastest are:</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Delta</strong>, with a range from approximately 0.5Hz&ndash;4Hz,<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>which signifies deepest meditation or dreamless sleep</p>\r\n<p><strong>Theta</strong>, from approximately 4Hz&ndash;8Hz,<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>signifying meditation or deep sleep.<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p><strong>Alpha</strong>, from approximately 8Hz&ndash;13Hz,<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>representing quietly flowing thoughts.</p>\r\n<p><strong>Beta</strong>, from approximately 13Hz&ndash;30Hz,<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>which is a normal waking state.</p>\r\n<p>And<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p><strong>Gamma</strong>, from approximately 30Hz&ndash;42Hz<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>which is most active during simultaneous processing of information that engages multiple different areas of the brain.</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p><strong>History of EEG use in music:</strong></p>\r\n<p>Physicist Edmond Dewan began the study of brainwaves in the early 1960s and developed a &lsquo;brainwave control system&rsquo;.<span class=\"Apple-converted-space\">&nbsp; </span>The system detected changes in alpha rhythms which were used to turn lighting on or off. &ldquo;The light could also be replaced by &lsquo;an audible device that made a beep when switched on&rsquo;, allowing Dewan to spell out the phrase &lsquo; <em>I can talk</em> &rsquo; in Morse code&rdquo;. 
[8] Dewan met experimental composer Alvin Lucier, a meeting that inspired the first actual brainwave composition.</p>\r\n<p>Alvin Lucier first performed <em>Music for Solo Performer</em> in 1965. It involved the composer sitting in a chair on stage, with his eyes closed, while his brainwaves were recorded.<span class=\"Apple-converted-space\">&nbsp; </span>The data from the recording was amplified and distributed to speakers set up around the room.<span class=\"Apple-converted-space\">&nbsp; </span>The speakers were placed against different types of percussion instruments, so the vibration of the speakers would cause the instruments to sound.</p>\r\n<p>Lucier was able to control the percussion events through control of his cognitive functions, and found that a break in concentration would disrupt that control.<span class=\"Apple-converted-space\">&nbsp; </span>Although mastery over the alpha rhythm was (and is) difficult, <em>Music for Solo Performer</em> greatly contributed to the field of experimental music and illustrated the depth of possibility in using EEG control over musical performance.</p>\r\n<p>Computer scientist Jacques Vidal published the paper <em>Toward Direct Brain-Computer Communication </em>in 1973, which first proposed the Brain-Computer Interface (BCI), a means of using the brain to control external devices.<span class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>This was the very beginning of BCMI research, which has evolved into an interdisciplinary field of study &ldquo;at the crossroads of music, science and biomedical engineering&rdquo; [11]. BCMIs (also referred to as Brain Machine Interfaces, or BMIs) are still in use today, and the field of research around them is in its infancy.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Project Overview:</strong></p>\r\n<p>This project records EEG signals from the subject using four non-invasive dry extra-cranial electrodes from a commercially available MUSE EEG headband. 
Measurements are recorded from the TP9, AF7, AF8, and TP10 electrodes, as specified by the International Standard EEG placement system, and the data is converted to absolute band powers, based on the logarithm of the Power Spectral Density (PSD) of the EEG data for each channel. Heart rate data is obtained through PPG measurements, although that data is not used in the current version of this project. EEG measurements are recorded in bels/dB to determine the PSD within each of the frequency ranges.</p>\r\n<p>The EEG readings are translated into music in real time, and the subjects are instructed to employ deep breathing exercises while they focus on the musical feedback. <br /><br />Great care was taken in defining the compositional strategies of the interactive content in order to deliver a truly generative composition that was also capable of producing musically recognizable results.</p>\r\n<p>All permutations of the scales, modes and chords being used, as well as rhythms and performance characteristics, needed to be considered beforehand so that a finite set of parameters extracted from the EEG data could be parsed and used to produce a well-formed and dynamic piece of music.</p>\r\n<p>There are three main sections of this Max patch:</p>\r\n<p>1: The <strong>EEG data capture</strong> section.</p>\r\n<p>2: The <strong>EEG data conversion</strong> section.</p>\r\n<p>3: The<strong> Sound generation and DSP</strong> section.</p>\r\n<p>The <strong>EEG data capture</strong> section receives EEG data from the Muse headband, which is converted to OSC data and transmitted over WiFi via the iOS app Mind Monitor. That data is then split into the five separate brainwave frequency bandwidths: delta, theta, alpha, beta and gamma. Additional data is also captured, including accelerometer, gyroscope, blink and jaw clench, 
in order to control for any artifacts in the data capture. Sensor connection data is used to visualize the integrity of the sensor&rsquo;s attachment to the subject. PPG data is also captured for use in a future iteration of the project.</p>\r\n<p>The <strong>EEG data conversion</strong> section accepts the EEG bandwidth data representing specific event-related potentials, and translates it to musical events.</p>\r\n<p>First, significant thresholds for each brainwave frequency bandwidth are defined. These are chosen based on average EEG measurements taken prior to the use of the musical feedback. When those thresholds are reached or exceeded, an event is triggered. Depending on the mappings, those events can be one or more of several types of operations: the sounding of a note, a change in pitch or scale or mode, note values and timings, and/or other generative performance characteristics.</p>\r\n<p>&nbsp;</p>\r\n<p>This section comprises three subsections that format their data output differently, depending on the use case: <br />1. <strong>Internal Sound Generation and DSP</strong>, for use completely within the Max environment.</p>\r\n<p>2. <strong>External MIDI</strong>, for use with MIDI-equipped hardware or software.</p>\r\n<p>and</p>\r\n<p>3. 
<strong>External Frequency</strong> <strong>and gate</strong>, for use with modular synthesizer hardware.</p>\r\n<p>Each of these can be used separately or simultaneously, depending on the needs of the piece.</p>\r\n<p>For the data conversion, the event-related potentials are mapped in the following way:<br />Changes in <strong>alpha</strong>, relative to the predefined threshold, govern the triggering of notes, as well as the scale and mode.</p>\r\n<p>Changes in <strong>theta</strong>, relative to the threshold, influence note value.</p>\r\n<p>Changes in <strong>beta</strong>, relative to the threshold, influence spatial qualities like reverberation and delay.</p>\r\n<p>Changes in <strong>delta</strong>, relative to the threshold, influence the degree of spatial effects.</p>\r\n<p>Changes in <strong>gamma</strong>, relative to the threshold, influence timbre.</p>\r\n<p>Any of these mappings or threshold decisions can be easily changed to accommodate a different thesis or set of standards.</p>\r\n<p>The third section is <strong>Sound generation and DSP</strong>. It is responsible for the sonification of the data translated from the <strong>EEG data conversion</strong> section. This section includes synthesis models, timbre characteristics, and spatial effects.</p>\r\n<p>This project uses three synthesized voices created in Max 8 for the generative musical feedback. There are two subtractive voices that each use a mix of sine, sawtooth and triangle waves, and one FM voice.</p>\r\n<p>The timbral effects employed are waveform mixing, frequency modulation, and high-pass, band-pass and low-pass filters. 
The spatial effects used include reverberation and delay. In addition to the initial settings of the voices, each of the timbral and spatial effects is modulated by separate event-related potential data captured by the EEG.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Conclusions:</strong></p>\r\n<p>&nbsp;</p>\r\n<p>This project is a contemporary interpretation of an idea I've been interested in for many years, starting with investigation into bidirectional EKG biofeedback.</p>\r\n<p>My initial experience with the subject was during a university degree in psychophysics (a branch of psychology). Some promising research at the university focused on reducing stress in asthmatic subjects for the purpose of lessening the frequency of attacks. [12]</p>\r\n<p>At the time, the technology required to explore this idea was of considerable size, and prohibitively expensive for all but medical or formally funded academic purposes. With the current availability of low-cost electroencephalography (EEG) devices and heart rate monitors, the possibility of autonomous exploration of these concepts has become a reality.</p>\r\n<p>The procedure, when using this work for the exploration of the physiological effects of neuro- and bi-directional feedback, starts with obtaining and comparing two data sets: a control and a therapeutic data set. The control set records EEG data without utilizing musical feedback or breathing exercises. The therapeutic set records EEG data with the feedback and breathing exercises.</p>\r\n<p>&nbsp;</p>\r\n<p>Although this project is primarily concerned with changes in the alpha EEG brainwave frequency range, changes in other frequency ranges were used to trigger events in the feedback. 
This approach was adopted to ensure that a subject&rsquo;s loss of focus (and/or a drop in the PSD of alpha) would not negatively affect the generation of novel musical feedback, and that, with the help of consistent feedback, the subject would be able to regain their focus and continue. Depending on the subject&rsquo;s state of relaxation (and the PSD of the other four EEG frequency ranges measured), the performance and phrasing of the musical feedback would change in such a way as to encourage greater focus.</p>\r\n<p>For the initial proof-of-concept trials, I tested myself and a small sampling of other subjects. Preliminary data shows that alpha readings were higher, on average, during the therapeutic phase. A higher overall peak value was also achieved during the therapeutic phase. This suggests that this feedback model is an effective way of increasing activity in the alpha brainwave frequency range, which is the beneficial physiological and psychological effect I was hoping to find, although much more data needs to be collected before any definitive conclusions can be drawn. At this point, the system has been tested and is functional, and further research can begin. The modular design of the work allows almost any variable to be included or excluded, which will be necessary moving forward with the research, in order to more thoroughly test the foundational elements of the thesis, as well as any musicological exploration and analysis that defining the feedback raises.<br /><br />In the meantime, I am already using the software as a compositional system to create recorded works and live soundtracks. 
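The control-versus-therapeutic comparison described above can be sketched in a few lines. This is a hypothetical Python illustration, not the project's actual analysis code: the session values and the `summarize` helper are invented for the example; the mean/peak comparison is the one the text describes.

```python
def summarize(alpha_psd):
    """Mean and peak alpha-band PSD for one recording session."""
    return {"mean": sum(alpha_psd) / len(alpha_psd), "peak": max(alpha_psd)}

# Illustrative per-frame alpha PSD values for the two session types.
control = [0.42, 0.48, 0.45, 0.50, 0.44]      # no feedback, no breathing exercises
therapeutic = [0.55, 0.61, 0.58, 0.72, 0.63]  # with feedback and breathing exercises

c, t = summarize(control), summarize(therapeutic)
# The preliminary result reported above: higher mean and higher peak
# alpha during the therapeutic phase.
print(t["mean"] > c["mean"], t["peak"] > c["peak"])  # -> True True
```

With real data, the same comparison would be run per subject across many sessions before drawing any conclusions, as the text notes.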
I am also planning to mount the project as an interactive installation in a gallery setting.</p>\r\n<p>&nbsp;</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Contact Details:</strong></p>\r\n<p>&nbsp;</p>\r\n<p>Johnny Tomasiello<br /><br /><a href=\"mailto:johnnytomasiello@gmail.com\">johnnytomasiello@gmail.com</a><br /><br /></p>\r\n<p>&nbsp;</p>\r\n<p><strong>Credits &amp; Acknowledgments:</strong></p>\r\n<p>IRCAM</p>\r\n<p>Cycling &rsquo;74</p>\r\n<p>Carol Parkinson, Executive Director of Harvestworks</p>\r\n<p>Melody Loveless, NYU &amp; Max certified trainer</p>\r\n<p>Dr. Paul M. Lehrer and Dr. Richard Carr</p>\r\n<p>InteraXon Muse electroencephalography headband</p>\r\n<p>James Clutterbuck (Mind Monitor developer)</p>\r\n<p>&nbsp;</p>\r\n<p><strong>References:</strong></p>\r\n<p>&nbsp;</p>\r\n<p><strong>[1] &ldquo;Mental Emotional Sentiment Classification with an EEG-based Brain-Machine Interface.&rdquo;</strong></p>\r\n<p>Bird, Jordan J.; Ekart, Aniko; Buckingham, Christopher D.; Faria, Diego R., 2019</p>\r\n<p>&nbsp;</p>\r\n<p><strong>[2] &ldquo;Effects of mental state on heart rate and blood pressure variability in men and women.&rdquo;</strong></p>\r\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Madden+K&amp;cauthor_id=8590551\">K Madden</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Savard+GK&amp;cauthor_id=8590551\">G K Savard</a>, 1995</p>\r\n<p>&nbsp;</p>\r\n<p><strong>[3] &ldquo;How Can Music Influence the Autonomic Nervous System Response in Patients with Severe Disorder of Consciousness?&rdquo;</strong></p>\r\n<p>Francesco Riganello, Maria D. 
Cortese, Francesco Arcuri, Maria Quintieri, and Giuliano Dolce, 2015</p>\r\n<p>&nbsp;</p>\r\n<p><strong>[4] Neurofeedback: A Comprehensive Review on System Design, Methodology and Clinical Applications</strong></p>\r\n<p><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/?term=Marzbani%20H%5BAuthor%5D&amp;cauthor=true&amp;cauthor_uid=27303609\"><strong>Hengameh Marzbani</strong></a><strong>, </strong><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/?term=Marateb%20HR%5BAuthor%5D&amp;cauthor=true&amp;cauthor_uid=27303609\"><strong>Hamid Reza Marateb</strong></a><strong>,</strong> <strong>and </strong><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/?term=Mansourian%20M%5BAuthor%5D&amp;cauthor=true&amp;cauthor_uid=27303609\"><strong>Marjan Mansourian</strong></a><strong>,</strong><strong> 2016</strong></p>\r\n<p>&nbsp;</p>\r\n<p><strong>[5] Stress Management Techniques: Are They All Equivalent, or Do They Have Specific Effects?</strong></p>\r\n<p>Paul M. Lehrer and Richard Carr, 1994</p>\r\n<p>&nbsp;</p>\r\n<p><strong>[6] Alpha activity and cardiac correlates: three types of relationships during nocturnal sleep</strong></p>\r\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Ehrhart+J&amp;cauthor_id=10802467\">J Ehrhart</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Toussaint+M&amp;cauthor_id=10802467\">M Toussaint</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Simon+C&amp;cauthor_id=10802467\">C Simon</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Gronfier+C&amp;cauthor_id=10802467\">C Gronfier</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Luthringer+R&amp;cauthor_id=10802467\">R Luthringer</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Brandenberger+G&amp;cauthor_id=10802467\">G Brandenberger</a>, 2000</p>\r\n<p>&nbsp;</p>\r\n<p><strong>[7] &ldquo;A Composer's Confessions&rdquo;</strong></p>\r\n<p>John Cage, 1948<span 
class=\"Apple-converted-space\">&nbsp;</span></p>\r\n<p>&nbsp;</p>\r\n<p><strong>[8] Brainwaves in concert: the 20th century sonification of the electroencephalogram<br /></strong>Bart Lutters, Peter J. Koehler, 2016</p>\r\n<p>&nbsp;</p>\r\n<p><strong>[9] The Berger Rhythm: Potential Changes From The Occipital Lobes in Man</strong></p>\r\n<p>Adrian and Matthews, 1934</p>\r\n<p>&nbsp;</p>\r\n<p><strong>[10] How To Interpret an EEG and its Report</strong></p>\r\n<p>Marie Atkinson, MD, 2010</p>\r\n<p>&nbsp;</p>\r\n<p><strong>[11] Brain-Computer Music Interfacing: Interdisciplinary Research at the Crossroads of Music, Science and Biomedical Engineering<br /></strong>Miranda, ER, 2014</p>\r\n<p>&nbsp;</p>\r\n<p><strong>[12] Relaxation and Music Therapies for Asthma Among Patients Prestabilized on Asthma Medication</strong></p>\r\n<p>Paul Lehrer et al., 1994</p>",
        "topics": [
            {
                "id": 562,
                "name": "Bcmi",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 565,
                "name": "Biofeedback",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 561,
                "name": "Brain-computer music interface",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 559,
                "name": "Brainwaves",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 272,
                "name": "Generative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 564,
                "name": "Neurofeedback",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 563,
                "name": "Neuroscience",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 567,
                "name": "Psychophysics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 566,
                "name": "Research",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18362,
            "forum_user": {
                "id": 18355,
                "user": 18362,
                "first_name": "Johnny",
                "last_name": "Tomasiello",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/4b62dafc53dcbf42b1b50f617668de0a?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-02-13T13:18:35.802851+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Johnny_Tomasiello",
            "first_name": "Johnny",
            "last_name": "Tomasiello",
            "bookmarks": []
        },
        "slug": "moving-towards-synchrony",
        "pk": 939,
        "published": true,
        "publish_date": "2021-03-15T14:58:37+01:00"
    },
    {
        "title": "32 Sensors",
        "description": "Program Notes for Ircam Forum at NYU Oct 2022",
        "content": "<p style=\"margin-bottom: 0cm;\"><a href=\"http://www.eigenklang.de/sensor32Press.jpg\" title=\"Sensor 32\">http://www.eigenklang.de/sensor32Press.jpg</a></p>\n<p style=\"margin-bottom: 0cm;\">The new sensor array allows the control of numerous parameters simultaneously in real time (polyphonic). This seems to me essential for improvisation. Hands or legs and the upper body can be used. IR distance sensors based on the triangulation principle generate analogue voltages that are translated into MIDI continuous controllers. The array features from 32 up to 48 controllers. Most of them feed algorithms. For the player's orientation, LED bars with 10 to 20 display levels are placed close to the sensor.<br>In the first phase, Resynthese from NI Reaktor and Pianoteq modelling were used for the sonification.</p>\n<p style=\"margin-bottom: 0cm;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm;\">The mechanical construction, in the manner of a construction kit, allows the arrangement of the sensor boards in three-dimensional space to be adapted.<br>Potential: besides composition algorithms, or DY-ing, automata (I built some), spatial effects, light and video synthesizers could also be controlled. So we are looking at a kind of half-a-conductor (with the hardware made of semiconductors). Gesture recognition or AI are not planned; I prefer direct cause-and-effect access. I want to learn myself. Of course, a lot of configuration work has to be done for each composition, as well as rehearsal effort on the part of the performer.</p>\n<p style=\"margin-bottom: 0cm;\">&nbsp;</p>\n<p style=\"margin-bottom: 0cm;\">I do not start from the paradigm of universal gestures. Rather, the playing (ad hoc composing) of a complex instrument is imitated: operating an organ with hands and feet produces gestures as a side effect, but they always depend on the purpose of the sound production and the construction of the console. 
In this respect, it is a traditional approach that relates to highly developed performance technique. The benefit of my system is that it makes numerous parameters simultaneously available to computer algorithms of all kinds. Gestures also arise, of course, amounting to a composed visualisation of sound.</p>",
        "topics": [
            {
                "id": 100,
                "name": "Sensor",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 31138,
            "forum_user": {
                "id": 31091,
                "user": 31138,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/KFGerber_Photo_SeverinVogl.JPG",
                "avatar_url": "/media/cache/17/7a/177ac0ae1bcc23c6ea05d909928da456.jpg",
                "biography": "He began playing the electric bass autodidactically. In 1975, he attended musicology lectures with Riethmüller in Freiburg as a guest student.\nAfter turning to jazz, he studied double bass with Adelhard Roidinger in Munich. He has an M.Sc. in physics from the LMU Munich.\nHe has given live algorithmic performances, including a co-improvisation with the U of Michigan Dancers at the 1998 ICMC in Michigan. This featured live formula editing, an anticipation of live coding.\n\"Beautiful Numbers\" was awarded the electronic \"Music for Dance\" award at Bourges.\nSince \"Loops\" for solo piano, he has also created works in traditional notation without electronics.\nAfter an invitation to the 2017 Kontakte Festival at the AdK Berlin, his \"computer music without loudspeakers\" has also attracted international interest, for example at Berklee in Boston and in Seoul, South Korea, in 2019.\nHis \"Violinautomat\" was selected for the World Music Days in Tallinn, Estonia. He received the \"Award of Distinction\" at Matera Intermedia 2020 in Italy and the Best Music Award of the CMMR, Tokyo.\nHis current projects are an automaton for alto recorder, a bowed psaltery with 16 bows, an extended snare, and a hammer zither.",
                "date_modified": "2022-09-06T16:43:03+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "kfg4",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "32-sensors",
        "pk": 1268,
        "published": false,
        "publish_date": "2022-08-26T22:57:51.718793+02:00"
    },
    {
        "title": "xp4l : a new flexible spatial sound system for Ableton Live",
        "description": "This article sums up the implementation, features and functionalities  of xp for live, a max for live based sound spatialization system for Ableton Live using Spat~library released these days by Eric Raynaud",
        "content": "<div class=\"page\" title=\"Page 1\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<h1 style=\"text-align: justify;\"><strong>About<img src=\"/media/uploads/user/fb91300d13af2dcbb7f240154f0d1f07.png\" alt=\"\" width=\"1504\" height=\"673\" /></strong></h1>\r\n<p style=\"text-align: justify;\"><a href=\"/projects/detail/xp4l/\">xp4l</a> is a fully integrated solution designed to expand, with simplicity, Ableton Live's potential toward the field of spatial sound performance. The goal of&nbsp;<a href=\"/projects/detail/xp4l/\">xp4l</a> is to provide Ableton users with a flexible and simplified environment to create 3d audio projects.</p>\r\n<p><video width=\"300\" height=\"150\" style=\"display: block; margin-left: auto; margin-right: auto;\" controls=\"controls\">\r\n<source src=\"https://www.dropbox.com/s/volkhi41p35mrxg/out.mp4?raw=1\" /></video></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p>xp refers to both 'e<strong><em>XP</em></strong>and' and to the oral abbreviation for '<em>e<strong>XP</strong>erimental</em>' in French.</p>\r\n</div>\r\n</div>\r\n</div>\r\n<p style=\"text-align: justify;\">It is made of a free max-for-live suite consisting of 5 devices, and a standalone application that users have to purchase.</p>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<p style=\"text-align: justify;\"><img src=\"/media/uploads/user/93e2b4276f38707f95220d8cea228412.png\" alt=\"\" width=\"1504\" height=\"796\" /></p>\r\n<div class=\"page\" title=\"Page 1\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n<h3 style=\"text-align: justify;\"><strong>Background</strong></h3>\r\n<p style=\"text-align: justify;\">The xp project was initiated in 2019 during <a href=\"https://forum.ircam.fr/article/detail/symbiosis/\">the artistic research residency at ircam</a> of <a href=\"https://www.ircam.fr/person/eric-raynaud/\">Eric 
Raynaud</a>. In partnership with the SAT of Montreal, and under the name '<em>symbiosis</em>', this short residency had as its objective to improve interactions between spatialized sound and generative visual synthesis in a context of immersive audio-visual creation such as the 'Satosphere'.</p>\r\n<p style=\"text-align: justify;\">From new perspectives and ideas born during this residency, and with the occurrence of the pandemic, Eric Raynaud reoriented the project toward a more systemic and creative approach, offering possibilities for the democratization of spatial sound practice while linking it to generative synthesis through a jitter context that is used to manage the geometric components of the sound field.</p>\r\n<h3 style=\"text-align: justify;\"><strong>Core</strong></h3>\r\n<p style=\"text-align: justify;\">The project is partly based on the Ircam Spat~ library, the iconic max-msp object library unique for its ability to simulate virtual acoustic spaces and sound perception in three dimensions. Therefore,&nbsp;<a href=\"/projects/detail/xp4l/\">xp4l</a> is intended for past, new or future users of the Spat~ library. Although it was possible before&nbsp;<a href=\"/projects/detail/xp4l/\">xp4l</a> existed, accessing the library from Ableton Live in the max-for-live environment in a flexible format was complex, a fact that deprived the majority of creators of this incredible tool. xp4l provides Ableton users with a solution that alleviates this problem.</p>\r\n<p style=\"text-align: justify;\">On the other hand, a large part of the system relies on a specific standalone using the jitter library that replaces the usual spat gui, adding other functionalities and offering promising potential for further updates.</p>\r\n<p style=\"text-align: justify;\">The whole system is dynamic and flexible, up to 62 output channels (an Ableton limitation). 
It doesn't require any max coding, and only a little configuration is expected from users, so they can almost immediately dedicate their time to content design.</p>\r\n<p style=\"text-align: justify;\"><a href=\"/projects/detail/xp4l/\">xp4l</a> should not be considered as a suite of tools but as an interconnected system that hybridizes Ableton, turning it into a spatial sound processor while keeping what makes its workflow unique: a <span style=\"font-size: 1.125rem;\">clever balance between artistic creativity and live spontaneity.</span></p>\r\n<h1 style=\"text-align: justify;\"><strong>Max For Live Devices</strong></h1>\r\n</div>\r\n<h3 style=\"text-align: justify;\"><strong>xp4l.visual</strong></h3>\r\n<p style=\"text-align: justify;\"><a href=\"https://www.xp4l.com/xp4l-visual/\"><span style=\"font-size: 1.125rem;\">https://www.xp4l.com/xp4l-visual/</span></a></p>\r\n<p style=\"text-align: justify;\"><img src=\"/media/uploads/user/647e92a8a686a65f8248e329fc73dde3.jpg\" alt=\"xp4l.visual\" width=\"1173\" height=\"374\" /></p>\r\n<p style=\"text-align: justify;\"><strong>Features</strong>: It works as a remote controller for the standalone from Ableton: full synchronization with xp4l.app (open and close the standalone), choice among 4 different camera modes, customization of the 3d elements of the scene, hi-res screen-shots of the 3d scene, audio-reactive cue, save &amp; load configuration as user preset.</p>\r\n<h3 style=\"text-align: justify;\"><strong>xp4l.engine</strong></h3>\r\n<p style=\"text-align: justify;\"><a href=\"https://www.xp4l.com/xp4l-engine/\">https://www.xp4l.com/xp4l-engine/</a></p>\r\n<p style=\"text-align: justify;\">The xp4l.engine device is an important piece of xP. Basically, with a dynamic architecture, this device gives access to Ircam spat~ capabilities inside the Ableton/max-for-live environment. 
It works as a multichannel bus, so spatial audio is processed through this device.</p>\r\n<p style=\"text-align: justify;\"><img src=\"/media/uploads/user/59280eaf3d93d2ba7bfa51bf017646b3.png\" alt=\"xp4l.engine\" width=\"1766\" height=\"375\" /></p>\r\n<p style=\"text-align: justify;\"><strong>Features</strong>: Input/output monitoring, playback as a sound field stream or in binaural (KEMAR), unlimited configuration up to 62 channels (Ableton limitation), dynamic auto-filling of output routing channels, factory layout presets, load and save custom layouts as user presets, spat spatialization types supported (angular, vbap2d/3d, hoa2d/3d, etc.), 4 virtual rooms, dynamic output routing, audio tester, recording in multichannel format or as hoa components, and playback of recorded files.</p>\r\n<p style=\"text-align: justify;\">From this device, it's also possible to record the current multichannel audio stream as a multichannel interleaved file, or as an hoa-encoded stream (in the case of the hoa spatialization type). The process is straightforward and simple, and allows improvised work to be captured as precisely as written content.&nbsp;</p>\r\n<p><video width=\"300\" height=\"150\" style=\"display: block; margin-left: auto; margin-right: auto;\" controls=\"controls\">\r\n<source src=\"https://www.dropbox.com/s/jrvv7lu4midkvma/recording.mp4?raw=1\" /></video></p>\r\n<h3 style=\"text-align: justify;\"><strong>xp4l.source</strong></h3>\r\n<p style=\"text-align: justify;\"><a href=\"https://www.xp4l.com/xp4l-source/\">https://www.xp4l.com/xp4l-source/</a></p>\r\n<p>Drag and drop the device directly into Ableton to create a new track, or onto an already existing track. In both cases, a new source is automatically created in the 3d environment.</p>\r\n<p>From this device, the user can configure the perception of the source through a large number of parameters. 
It also offers a unique animation panel with 4 generative engines to move sound sources in space.</p>\r\n<p style=\"text-align: justify;\"><strong><img src=\"/media/uploads/user/dc91ece72e93a613c23a22a87441396d.png\" alt=\"xp4l.source\" width=\"1796\" height=\"247\" /></strong></p>\r\n<p style=\"text-align: justify;\"><strong>Features</strong>: Up to 16 sources (for now), dynamic instancing, full integration into the Ableton workflow, Cartesian or polar positioning methods, perception parameters of Ircam Spat~, load &amp; save configuration as user preset, a 4-mode generative engine for position animation in the 3d space, load &amp; save animation parameters as user preset, dynamic group and room assignation on the fly, customizable appearance and naming in the 3d scene, audio-reactive waveform in the 3d scene.</p>\r\n<h3 style=\"text-align: justify;\"><strong>xp4l.room</strong></h3>\r\n<p><a href=\"https://www.xp4l.com/xp4l-room/\">https://www.xp4l.com/xp4l-room/</a></p>\r\n<p style=\"text-align: justify;\">In Spat~, the \"Room\" is an artificial reverberator allowing room effect synthesis and control in real time, based on digital signal processing algorithms. With xp, thanks to the Ircam-spat library, it is possible to create a maximum of 4 rooms: reverberating acoustic spaces whose properties the spatial diffusion of sources will adopt. Each source can be assigned to any of these 4 rooms. Each room's parameters can be adjusted from this device. 
As for the other devices, drag and drop the device on a new track, and a room name is automatically attributed.</p>\r\n<p><img src=\"/media/uploads/user/471887d0afdf07dd26b0a1bf4b236f90.jpg\" alt=\"xp4l.room\" width=\"1280\" height=\"357\" /></p>\r\n<p style=\"text-align: justify;\"><strong>Features</strong>: up to 4 rooms, dynamic instancing in Ableton (drag &amp; drop / auto-naming), dynamic update with xp4l.source, all parameters of the Ircam Spat~ room module, fully exposed to the Ableton workflow, save &amp; load configuration as user preset.</p>\r\n<h3 style=\"text-align: justify;\"><strong>xp4l.group</strong></h3>\r\n<p><a href=\"https://www.xp4l.com/xp4l-group/\">https://www.xp4l.com/xp4l-group/</a></p>\r\n<p style=\"text-align: justify;\">Very flexible, it allows you to quickly transform the sound field in a multitude of playful ways by warping the geometric components of the sources gathered in groups.</p>\r\n<p style=\"text-align: justify;\">In the xp4l paradigm, a group is an organized subset within a geometric hierarchy. By default, all the sources created belong to group zero, corresponding to the top hierarchical level. It is then possible to create child groups of this level to which the sources can be assigned. 
xp4l.group allows you to modify the spatial properties of these groups, and therefore to interact with swarms of sound sources simultaneously.</p>\r\n<p><img src=\"/media/uploads/user/09fc89ee0dedb79f496b45f1c1f16c22.png\" alt=\"xp4l.group\" width=\"1796\" height=\"324\" /></p>\r\n<div class=\"page\" title=\"Page 2\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><strong>Features:</strong> create up to 4 groups under the main hierarchy, dynamic instancing in Ableton (drag &amp; drop / auto-naming), dynamic update with xp4l.source, easy source assignment, 3 transformation modes (translate, rotate, scale), generative transformation with procedural functions, fully exposed to the Ableton workflow, save &amp; load configuration as user preset</p>\r\n<p>xp4l.group in action:</p>\r\n<p><video width=\"300\" height=\"150\" style=\"display: block; margin-left: auto; margin-right: auto;\" controls=\"controls\">\r\n<source src=\"https://www.dropbox.com/s/xlrl56crzynia3f/group%232.mp4?raw=1\" /></video></p>\r\n<h1 style=\"text-align: justify;\">Standalone</h1>\r\n<div class=\"page\" title=\"Page 2\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p><a href=\"https://www.xp4l.com/xp4l-app/\">https://www.xp4l.com/xp4l-app/</a></p>\r\n<p><img src=\"/media/uploads/user/e16e7aaf2e63e73d616be4727ac39709.png\" alt=\"\" width=\"1504\" height=\"940\" /></p>\r\n<p style=\"text-align: justify;\">xp4l.app is a Max-based standalone application that works in synchronization with the xp4l devices. Although it is an independent application, it does not require any direct action from the user to launch. It has two functions in the system, representation and implementation, and both happen without user interaction, which makes setting up a project very easy and fast.</p>\r\n<p style=\"text-align: justify;\">The application must be authorized at first launch. 
The activation system is flexible enough to deactivate the currently used system and then activate it on another computer.</p>\r\n<div class=\"page\" title=\"Page 2\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p>For the user, it appears as a simple monitor with a 3d view that represents the virtual sound scene in a 3d world. <span style=\"font-size: 1.125rem;\">It is possible to navigate in that view with keyboard shortcuts and the mouse, and to choose among different camera positions. </span><span style=\"font-size: 1.125rem;\">Underneath, the application drives each of the xp4l devices with an organized architecture of messages through OSC communication.</span></p>\r\n<p><span style=\"font-size: 1.125rem;\"><video width=\"300\" height=\"150\" style=\"display: block; margin-left: auto; margin-right: auto;\" controls=\"controls\">\r\n<source src=\"https://www.dropbox.com/s/8ftfmtpsuq67g4j/spaceisnoise.mp4?raw=1\" /></video></span></p>\r\n<div class=\"page\" title=\"Page 2\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p>It also handles several features, notably the management of the messages addressing motion in space, which are all created from the application. This function is managed by OpenGL through the Jitter library implementation in Max/MSP, and gives xp4l strong potential and perspective for further developments.</p>\r\n<p><img style=\"display: block; margin-left: auto; margin-right: auto;\" src=\"/media/uploads/user/a5e2ce0dfc6fa4d2a7d4e0b300ab14ea.gif\" alt=\"\" width=\"374\" height=\"282\" /></p>\r\n<h1>Workflow</h1>\r\n<p>The video below illustrates basic workflow instancing.&nbsp; The idea was to transpose the usual Ableton workflow toward 3d sound design. Sources created with xp4l.source become spatial sound sources that can take any Ableton input: clips, audio signals, return tracks, other tracks, MIDI instruments, etc. 
Combining Ableton's flexibility with xp4l offers a very intuitive playground for creating spatial audio content, live performances, sound installations, and much more.</p>\r\n<p><video width=\"300\" height=\"150\" style=\"display: block; margin-left: auto; margin-right: auto;\" controls=\"controls\">\r\n<source src=\"https://www.dropbox.com/s/r8rzu1yxdrrkycm/runtime%231.mp4?raw=1\" /></video></p>\r\n<p>&nbsp;</p>\r\n<p>xp4l is available as a package with the standalone and the xp4l.devices bundle: <a href=\"https://www.xp4.com\">www.xp4.com</a></p>\r\n<p>It is only available for macOS at the moment, but a Windows version is scheduled to be available in about two months.</p>\r\n<p>A demo version with several limitations is also available.</p>\r\n<p>In addition to the Windows version, some cool features and improvements are already in the pipeline.</p>\r\n<p>Follow the journey here:</p>\r\n<p><a title=\"website\" href=\"https://www.xp4l.com/\">website</a></p>\r\n<p><a title=\"Facebook\" href=\"https://www.facebook.com/xp4live\">Facebook</a></p>\r\n<p><a title=\"Instagram\" href=\"https://instagram.com/xp4live\">Instagram</a></p>\r\n<p>&nbsp;</p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n<p style=\"text-align: justify;\">&nbsp;</p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 621,
                "name": "3daudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 207,
                "name": "Ableton",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 617,
                "name": "Abletonlive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 619,
                "name": "Immersivesound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 75,
                "name": "Jitter",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 290,
                "name": "M4l",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 103,
                "name": "MaxforLive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 616,
                "name": "Opengl",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 620,
                "name": "Spatialaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 618,
                "name": "Spatialsound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1709,
            "forum_user": {
                "id": 1707,
                "user": 1709,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/profil4.png",
                "avatar_url": "/media/cache/49/37/4937ce84289a16db6f9d5ea374376dfb.jpg",
                "biography": "Fraction (Eric Raynaud) is a new media artist, composer and sound artist whose work focuses in particular on immersive and audiovisual experience design.\n\nHis practice developed from a background in music composition and spatial sound, which led him to assemble a complete set of skills in the field of new media art. He now devotes his time to writing and producing pieces integrating digital materials of different kinds. He is particularly interested in forms of experience that have strong interactions between generative art and sonic matter. Combining complex scenography and hybrid digital writing with visuals, sound and physical media, he aims in particular to forge links between contemporary art and the digital realm within the frame of radical experiences.\n\nFascinated by sound intensity, energy, ecstasy, and the idea of \"being able to sculpt digital disorder as a raw matter\", he finds in the lexicon of sound spatialization the appropriate field for designing atypical pieces, placing the immediate physical and emotional experience at the center of his writing.",
                "date_modified": "2025-12-29T12:55:11.027970+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "fraction",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "xp4l-a-flexible-spatial-sound-system-for-ableton-live",
        "pk": 986,
        "published": true,
        "publish_date": "2021-09-11T08:49:42+02:00"
    },
    {
        "title": "Future Perfect: immersive 3D audiovisual installation and performance",
        "description": "Artistic research residency 2017.18.\r\nGarth Paine.\r\nIn collaboration with the Espaces acoustiques et cognitifs and Interaction son musique mouvement teams of Ircam-STMS and the ZKM.",
        "content": "<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h1 class=\"dotted\"></h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Artistic research residency 2017.18</h3>\r\n<p><strong>Future Perfect: immersive 3D audiovisual installation and performance</strong><br />In collaboration with the <a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac/\">Espaces acoustiques et cognitifs</a> and <a href=\"https://www.ircam.fr/recherche/equipes-recherche/issm/\">Interaction son musique mouvement</a> teams of Ircam-STMS and the ZKM.</p>\r\n<p>Future Perfect is an immersive 3D audiovisual performance, presented in concert at the ZKM, as well as an installation developed by Garth Paine. A musical and visual virtual-reality work for smartphone, it relies on synchronized 3D sound. The project offers a personal experience exploring the link between virtual reality, taken as a documentation format for environmental research and the archiving of nature (with the idea that \"nature\" as we know it today will exist only through virtual-reality archives), and the very notion of the \"virtual\", taken as a hyperreal imaginary world conveyed through technological mediation. The residency at Ircam will be devoted to developing a mapping between audience interaction, the audience seen as a crowd equipped with smartphones, and interactive spatialization techniques. The residency at the ZKM will instead focus on composing a musical work to be performed in the Klangdom. Creating an immersive, captivating listening experience, it will draw on field recordings gathered in Paris and Karlsruhe. The audience will be provided with VR HMDs for their smartphones, which will generate the visual environment of the work within the Klangdom. Combined with the Klangdom loudspeaker system, projected images including environmental scenes (representing natural sanctuaries found in these two cities), and a 360-degree VR world, the performance will be perceived by the audience not from a fixed viewpoint but, on the contrary, as multiple personal journeys through the work, multiplying the perspectives of listening and reading.</p>\r\n<div class=\"row\">\r\n<div class=\"col-sm-9 col-sm-push-3 col-lg-9 col-lg-push-2 white-bg\">\r\n<h6 class=\"dotted\"></h6>\r\n<h1 class=\"dotted\">Garth Paine</h1>\r\n</div>\r\n</div>\r\n<div class=\"row\">\r\n<div class=\"col-sm-3 col-lg-2 page__sidebar\">\r\n<div>\r\n<figure class=\"person-list-box__image profile\" style=\"text-align: center;\"><img src=\"https://www.ircam.fr/media/uploads/personnels/recherche%20artistique%202018/.thumbnails/garth_paine.jpg/garth_paine-135x135.jpg\" alt=\"person\" /></figure>\r\n</div>\r\n</div>\r\n<div class=\"mb2 col-sm-9 col-lg-9 white-bg page__content\" data-summary-content=\"\">\r\n<h3>Biography</h3>\r\n<p>Garth Paine is professor of digital sound and interactive media at the School of Arts, Media and Engineering and Digital Culture of Arizona State University. He has created interactive environments where the audience, through its presence and behavior, generates the soundscape. He has composed several works for dance generated by real-time video capture and biosensing. He received a Green Room Award for outstanding creativity for his work Escape Velocity (Company in Space), and was a finalist for best score for contemporary dance in 2014. His work has been presented worldwide, most recently in Australia, the United States, Korea and Europe, in performances for percussion and live electronics, resonant metal instruments, danced movement, and robots playing Tibetan singing bowls. The breadth of his musical practice is expressed through his research on sound treated as material. Garth Paine founded and directed the Virtual, Interactive, Performance Research environment (VIPRe). He is considered an innovator in the field of the interaction between experimental music and its performance. His research spans the direction of the Taxonomy of Interfaces for Electronic Music performance (TIEM), projects with McGill and the EMF as partners, the production of an online database for NIME, and articles on interaction and somatics. His keynote presentation at NIME 2016 sketched a framework for the design of digital instruments.</p>\r\n</div>\r\n</div>\r\n<h2 class=\"dotted\">Links</h2>\r\n<ul class=\"unstyled-list fss\">\r\n<li><a href=\"http://www.activatedspace.com/\" title=\"Link\" target=\"_blank\"><i class=\"fa fa-link\"></i><span>&nbsp;</span>http://www.activatedspace.com/</a></li>\r\n</ul>\r\n</div>\r\n</div>",
        "topics": [
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1,
            "forum_user": {
                "id": 1,
                "user": 1,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/04edfc0ef6c6cf6d6b88fbc69f9f9071?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "admin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "future-perfect-installation-et-performance-3d-audiovisuelle-immersive",
        "pk": 19,
        "published": true,
        "publish_date": "2019-03-21T12:31:22+01:00"
    },
    {
        "title": "Tutorials channel playlist",
        "description": "Youtube channel of the Ircam Forum tutorials",
        "content": "<p><iframe width=\"560\" height=\"315\" title=\"YouTube video player\" src=\"https://www.youtube.com/embed/videoseries?list=PL6MqWe5aRuOAnKBcJKAGjY4vbixGRqKat\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>",
        "topics": [
            {
                "id": 67,
                "name": "Forum",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 4,
                "name": "Ircam",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 35,
                "name": "Meta-forum",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 5,
            "forum_user": {
                "id": 5,
                "user": 5,
                "first_name": "Greg",
                "last_name": "Beller",
                "avatar": "https://forum.ircam.fr/media/avatars/TEDxParis_2017_le_6_novembre_au_GRAND_REX_.jpg",
                "avatar_url": "/media/cache/b1/6b/b16b01ff81fa6d7d4cad736a4aca83c3.jpg",
                "biography": "Greg Beller works as an artist, researcher, computer designer for contemporary arts, and a teacher. At the nexus of Arts and Sciences at IRCAM, he has been successively a PhD student researching generative models for expressivity and their applications for speech and music, a computer music designer, the director of Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. As founder of the Synekine Project, he is currently completing a second PhD at the HfMT Hamburg on \"Natural Interfaces for Computer Music\" in the creation and the performance of artistic moments.",
                "date_modified": "2026-02-26T11:43:02.073799+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1243,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 1,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    },
                    {
                        "id": 400,
                        "forum_user": 5,
                        "date_start": "1970-01-01",
                        "date_end": "2125-11-20",
                        "type": 0,
                        "keys": [
                            {
                                "id": 8,
                                "membership": 400
                            },
                            {
                                "id": 334,
                                "membership": 400
                            }
                        ],
                        "type_string": null,
                        "num_keys": 0,
                        "is_valid": true
                    }
                ]
            },
            "username": "beller",
            "first_name": "Greg",
            "last_name": "Beller",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 28,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 32,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 5,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 4,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 222,
                    "emitter_object_id": 80,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 50,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 401,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 275,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 713,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 427,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 834,
                    "user": 5,
                    "subscription_meta": {}
                },
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 22,
                    "user": 5,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "tutorials-channel-playlist",
        "pk": 2052,
        "published": true,
        "publish_date": "2023-02-09T15:19:30+01:00"
    },
    {
        "title": "CALL FOR PROJECTS – “DIGITAL ARTS, SOUND ART AND NEW WRITING” 23_24",
        "description": "For the 9th year, Château Éphémère is pleased to launch its call for projects \"Digital Arts, Sound Art & New Writing\" for the 2023/2024 season. All projects whose production process draws on both new digital practices and sound and musical creation are eligible. ",
        "content": "<p><strong>For the 9th year, <a href=\"https://chateauephemere.org/\">Château Éphémère</a> launches its call for projects \"Digital Arts, Sound Art &amp; New Writing\" for the 2023_2024 season. This venue dedicated to digital, sound and musical creation has three main objectives</strong>:</p>\n<p>&nbsp;</p>\n<p>&bull; Offering French and international artists creation residencies;</p>\n<p>&bull; Promoting the creative uses of new technologies to the general public through a range of workshops combined with a monthly artistic programme;</p>\n<p>&bull; Offering everyone a space for meeting and conviviality.</p>\n<p>&nbsp;</p>\n<p>This project, supported notably by the Communauté Urbaine du Grand Paris Seine et Oise, the Région Ile-de-France and the Conseil Départemental des Yvelines, was initiated by the associations Usines Éphémères and Musiques et Cultures Digitales.</p>\n<p>As part of its residency programme, Château Éphémère launches the eighth edition of its call for applications, whose purpose is to support sound and digital artistic creation while making it benefit the territory and its inhabitants.</p>\n<p><br><strong>Eligibility:&nbsp;</strong></p>\n<p>&bull; All projects whose production process draws on both new digital practices and sound and musical creation are eligible.&nbsp;</p>\n<p>&bull; Projects may be submitted by an artist, a collective, or a production or distribution organization.</p>\n<p>&bull; The jury will pay particular attention to projects:&nbsp;<br>1 - wishing to develop actions aimed at the local community (workshops, encounters...).<br>2 - reflecting on the environment, whether through the choice of materials used in the design of the work and/or through the very subject of the project developed in residency.</p>\n<p>&bull; Each residency may also be punctuated by a public presentation.</p>\n<p>&nbsp;</p>\n<p><strong>Hosting conditions:</strong><br>Each of the 12 selected projects is hosted between September 2022 and July 2023 for a maximum duration of one month (divisible into two parts).</p>\n<p><strong>This support includes:</strong></p>\n<p>&bull; Accommodation, a workspace and access to Château Éphémère's technical equipment.</p>\n<p>&bull; Project leaders will also benefit from the support of the technical manager and the LabManager, depending on their availability.</p>\n<p>&bull; They will additionally be granted production support of €500 incl. tax.</p>\n<p>&bull; As part of our partnership with the Pépinières Européennes de Création, one or two complementary grants of €500 incl. tax each will be awarded to one or two winning projects, and distribution opportunities in Belgium may also be considered with our partner Transcultures, a centre for digital and sound cultures based in Wallonia.</p>\n<p>&nbsp;</p>\n<p><strong>&nbsp;★ Applications are open until 1 March 2023, 12:00.&nbsp;</strong></p>\n<p><strong>&nbsp;★ Information and application file:&nbsp;</strong><a href=\"https://urlz.fr/hCZh\">https://urlz.fr/hCZh</a>&nbsp; / <a href=\"https://chateauephemere.org/\">www.chateauephemere.org</a></p>\n<p><strong>&nbsp;★ Contact: aap@chateauephemere.org&nbsp;</strong></p>\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 27502,
            "forum_user": {
                "id": 27474,
                "user": 27502,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f457e8598e11a62a9411dff6f5151c81?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "chateauephemere",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "appel-a-projets-arts-numeriques-art-sonore-et-nouvelles-ecritures-23_24",
        "pk": 2028,
        "published": false,
        "publish_date": "2023-01-26T15:27:24.647162+01:00"
    },
    {
        "title": "ULYSSES Platform for contemporary music professionals",
        "description": "ULYSSES Platform forms a community of artists, artistic directors, managers and organizations working in the field of contemporary music. Becoming a member is free of charge.",
        "content": "<p><strong>The ULYSSES Platform&nbsp;</strong><a href=\"http://www.ulysses-network.eu\"> <span style=\"font-weight: 400;\">www.ulysses-network.eu</span></a></p>\r\n<p><span style=\"font-weight: 400;\">The ULYSSES Platform is an online platform for a </span><span style=\"font-weight: 400;\">wide network of artists (musicians, composers, conductors), artistic directors, cultural managers and organizations from all over the world, united by a common passion: contemporary music</span><span style=\"font-weight: 400;\">. The platform has over 3500 members to date, and the number is growing constantly.&nbsp;</span></p>\r\n<p>&nbsp;</p>\r\n<p><img src=\"/media/uploads/user/be072646929c81707eee286204c5bf28.jpg\" alt=\"\" width=\"900\" height=\"454\" /></p>\r\n<p>&nbsp;</p>\r\n<p><span style=\"font-weight: 400;\">As a member, you can use the ULYSSES Platform to:</span></p>\r\n<ul>\r\n<li><span style=\"font-weight: 400;\"> Create your profile with relevant information (biography, works&hellip;)</span></li>\r\n<li><span style=\"font-weight: 400;\"> Submit applications to calls</span></li>\r\n<li><span style=\"font-weight: 400;\"> Spot new works and possible collaborators</span></li>\r\n<li><span style=\"font-weight: 400;\"> Share artistic content and other documents of your work (scores, recordings, videos&hellip;) as well as information about events you&rsquo;re involved in</span></li>\r\n<li><span style=\"font-weight: 400;\"> Communicate with other members</span></li>\r\n</ul>\r\n<p>&nbsp;</p>\r\n<p><span style=\"font-weight: 400;\">The heart of the platform is a community of individuals, ensembles and organizations active in contemporary music. Creating a profile is, and will remain, free of charge. For organizations, the platform offers a tool to create and manage calls and other application processes with automated procedures, and a community in which to promote their activities and from which to find talent for their projects. 
</span><span style=\"font-weight: 400;\">The ULYSSES Platform also gives an insight into projects realized by the ULYSSES Network. Under the headline &ldquo;Focus On&rdquo; there are interviews, articles, and photo and video documentation of concerts and workshops organized throughout Europe. The ULYSSES Platform crew warmly welcomes all interested people to join the community!<br /></span></p>\r\n<p>&nbsp;</p>\r\n<p><span style=\"font-weight: 400;\">Behind the Platform: The ULYSSES Network</span></p>\r\n<p><span style=\"font-weight: 400;\">The ULYSSES Network brings together 13 European partner institutions involved in the support and promotion of young artists. These institutions are academies, summer schools, ensembles and festivals devoted to contemporary music and to developing the careers of young European composers and performers. The ULYSSES Network and the current platform are supported by the Creative Europe Programme of the European Union.</span></p>\r\n<p><span style=\"font-weight: 400;\">More information about the ULYSSES Network: </span><a href=\"http://project.ulysses-network.eu/\"><span style=\"font-weight: 400;\">http://project.ulysses-network.eu/</span></a></p>\r\n<p>&nbsp;</p>\r\n<p><span style=\"font-weight: 400;\">For any questions, contact </span><span style=\"font-weight: 400;\">Community Manager Vilja Ruokolainen, </span><a href=\"mailto:community.ulysses@gmail.com\"><span style=\"font-weight: 400;\">community.ulysses@gmail.com</span></a></p>",
        "topics": [
            {
                "id": 160,
                "name": "Calls",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 159,
                "name": "Community",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 161,
                "name": "Competition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 96,
                "name": "Contemporary",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 115,
                "name": "Music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 158,
                "name": "Network",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17679,
            "forum_user": {
                "id": 17675,
                "user": 17679,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/8ffd93da801d5a4bf7b3486f329b12aa?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "ulysses",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "ulysses-platform-for-contemporary-music-professionals",
        "pk": 235,
        "published": true,
        "publish_date": "2019-08-15T14:03:43+02:00"
    },
    {
        "title": "OpenMusic 7.0",
        "description": "OpenMusic 7.0 has been released.",
        "content": "<div>\r\n<p><a href=\"/projects/detail/openmusic/\">OpenMusic </a> 7.0 has been released.</p>\r\n<p>This version is compatible with the following operating systems:</p>\r\n<p>MacOS: 64 bits - ARM and Intel processors<br />(separate installers, please install the one matching your processor)</p>\r\n<p>WINDOWS: 32 bits</p>\r\n<p>LINUX: 64 bits RPM and DEB packages, tar-ball</p>\r\n<p>NEW FEATURES</p>\r\n<ul>\r\n<li>OM image built with LispWorks 8.0</li>\r\n<li>Send And Receive in/out</li>\r\n<li>gkant, an experimental rhythm quantifier based on omquantify that handles grace notes.</li>\r\n</ul>\r\n<p>IMPROVEMENTS</p>\r\n<ul>\r\n<li>reducetree: recursive version.</li>\r\n<li>Comments now have an editor. Shortcuts:<br />-&lsquo;c&rsquo; opens a comment window<br />-&lsquo;o&rsquo; if a comment is selected, opens the comment window for editing</li>\r\n</ul>\r\n<p>FIXES</p>\r\n<ul>\r\n<li>SOUND object reloads the new sound after a \"sound not found\" message</li>\r\n<li>mxml export debugging</li>\r\n<li>patch mode in score is now correctly saved</li>\r\n<li>omloop copies: input creation/deletion are refreshed</li>\r\n<li>Fixed extent issue of chordseqs multiseq output</li>\r\n</ul>\r\n<p>Enjoy<br />K</p>\r\n</div>",
        "topics": [
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 224,
                "name": "Computer-aided composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 253,
                "name": "Composition Assistée par Ordinateur",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            },
            {
                "id": 377,
                "name": "Lisp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 14,
            "forum_user": {
                "id": 14,
                "user": 14,
                "first_name": "Karim",
                "last_name": "Haddad",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/1f556229c0742ef0586dd43d312f81a4?s=120&d=retro",
                "biography": "Karim Haddad was born in 1962 in Beirut, Lebanon. He studied at the national conservatory there until it closed its doors in 1975 due to the civil war. He then went on to study philosophy and literature. Haddad received six awards from the CNSMD de Paris in addition to the Diplôme Supérieur de Composition with honors. He has worked with composers such as A. Bancquart, P. Mefano, K. Huber, and Emmanuel Nunes. This learning period was marked by his keen interest in non-tempered spaces and their strong relationship with temporal poetry. In 1992 and 1994 he took part in the Ferienkurse für Musik in Darmstadt, where he received a scholarship. In 1995, he took a class in computer music at IRCAM, and from that point on, the computer became the only tool he used for the elaboration of his works.\r\n\r\nAs a computer music expert, and more particularly as an expert in computer-assisted composition, in 2000 he was given responsibility for technical support of the IRCAM Forum. He has developed several tools for the OpenMusic environment (synthesis control via Csound), as well as interfaces between this environment and score editors such as Finale and Lilypond.",
                "date_modified": "2026-02-18T11:08:17.096351+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 3,
                        "forum_user": 14,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-31",
                        "type": 0,
                        "keys": [
                            {
                                "id": 544,
                                "membership": 3
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "haddad",
            "first_name": "Karim",
            "last_name": "Haddad",
            "bookmarks": []
        },
        "slug": "openmusic-70",
        "pk": 1125,
        "published": false,
        "publish_date": "2022-03-14T19:45:27+01:00"
    },
    {
        "title": "Benny Sluchin and Mikhail Malt - Somax 2 - Update",
        "description": "Abstract : \nSomax 2 is an application for musical improvisation and composition. It is implemented in Max and is based on a generative model using a process similar to concatenative synthesis to provide stylistically coherent improvisation, while listening to and adapting to a musician (or any other type of audio or MIDI source) in real time. The model operates in the symbolic domain and is trained on a musical corpus, consisting of one or multiple MIDI files, from which it draws the material used for improvisation. The model can be used with little configuration to autonomously interact with a musician, but it also allows manual control of its generative process, effectively letting the model serve as an instrument that can be played in its own right.",
        "content": "<p>Article :</p>\n<p>Somax 2 is a multi-agent interactive system performing live machine co-improvisation with musicians, based on machine listening, machine learning, and generative units. Agents provide stylistically coherent improvisations based on learned musical knowledge while continuously listening to and adapting to input from musicians or other agents in real time. The system is trained on any musical materials chosen by the user, effectively constructing a generative model (called a corpus) from which it draws its musical knowledge and improvisation skills. Corpora, inputs and outputs can be MIDI as well as audio, and inputs can be live or streamed from MIDI or audio files. Somax 2 is one of the improvisation systems descending from the well-known Omax software, presented here in a totally new implementation. As such, it shares with its siblings the general loop [listen / learn / model / generate], using a form of statistical modeling that builds a highly organised memory structure through which it can navigate into new musical organisations while keeping stylistic coherence, rather than generating unheard sounds as other ML systems do. However, Somax 2 adds totally new versatility: it is highly reactive to the musician's decisions, and it lets its creative agents communicate and work together in the same way, thanks to cognitively inspired interaction strategies and a finely optimized concurrent architecture that make all its units cooperate smoothly.<br>Somax 2 allows detailed parametric control of its players and can even be played alone as an instrument in its own right, or used in a composition workflow. It is possible to listen to multiple sources and to create entire ensembles of agents, where the user can control in detail how these agents interconnect and &ldquo;influence&rdquo; each other.<br>Somax 2 is conceived as a co-creative partner in the improvisational process: after some minimal tuning, the system is able to behave in a self-sufficient manner and participate in a diversity of improvisation set-ups and even installations.</p>\n<p>This presentation will introduce the software environment, demonstrate its learning and interaction modes, explain the basic and advanced controls in the user interface, and present a real musical situation with famous trombonist Benny Sluchin improvising along with Somax 2.</p>\n<p>&nbsp;</p>\n<h4 id=\"bio\"><strong>Bio&nbsp;: </strong></h4>\n<p><strong>Benny Sluchin</strong></p>\n<p>Benny Sluchin studied at the Tel-Aviv Conservatory and the Jerusalem Music Academy, in parallel with pursuing a math and philosophy degree at the University of Tel-Aviv. He joined the Israel Philharmonic Orchestra and was engaged as co-soloist for the Jerusalem Radio Symphony Orchestra. A member of the Ensemble intercontemporain since 1976, he has premiered numerous works and recorded <em>Keren</em> by Iannis Xenakis and the <em>Sequenza V</em> by Luciano Berio, in addition to 19<sup>th</sup>- and 20<sup>th</sup>-century works for trombone.</p>\n<p>A doctor of Mathematics, Benny Sluchin is involved in acoustic research at Ircam. Passionate about teaching, he edited <em>Brass Urtext</em>, a series of original texts on teaching brass instruments. He published <em>Le trombone &agrave; travers les &acirc;ges</em> (Buchet-Chastel) with Raymond Lapie. Two of his books have been awarded the Sacem Prize for pedagogic publications: <em>Contemporary Trombone Excerpts</em> and <em>Jeu et chant simultan&eacute;s sur les cuivres</em>. His written publication on brass mutes is a benchmark, and his research on <em>Computer Assisted Interpretation</em> has been the subject of several presentations and scientific publications.</p>\n<p>As an application of his research, Benny has released a number of recordings of John Cage's music. His recent film, <em>Iannis Xenakis, Le d&eacute;passement de soi</em>, was produced by Mode Records.</p>\n<p>&nbsp;</p>\n<p><strong>Mikhail Malt</strong></p>\n<p>I am a researcher in the&nbsp;<a href=\"https://www.stms-lab.fr/team/representations-musicales/\">Musical Representations</a>&nbsp;team at IRCAM, a Computer Music Designer and Teacher (within the IRCAM Department of Pedagogy), Associate Research Director at Sorbonne University, and a composer. I have a scientific and musical background (engineering, composition and conducting), and my research focuses mainly on computer-assisted music writing (computer-assisted composition) and musical formalization.</p>\n<p>Since my arrival at IRCAM (October 1990 as a student and 1992 as a research composer), my main activity has been divided between research and teaching, especially in the composition and computer music curriculum.</p>\n<p>Currently, my work is developing along three axes:&nbsp;</p>\n<p><strong>&bull;&nbsp;&nbsp;</strong>modeling and musical representation: the study of the expressivity of formal models in computer-assisted composition and in real-time generative music, and the modeling of open works;&nbsp;<br><strong>&bull; &nbsp;</strong>the development of interfaces and tools for computer-assisted composition;&nbsp;<br><strong>&bull;</strong>&nbsp;&nbsp;musical analysis, computer-assisted musical performance, and musical creation.</p>\n<p>&nbsp;</p>\n<p><img alt=\"\" src=\"/media/uploads/Ateliers Paris 2022/.thumbnails/mikhail_malt.jpeg/mikhail_malt-392x237.jpeg\" style=\"display: block; margin-left: auto; margin-right: auto;\"></p>\n<p>&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 17684,
            "forum_user": {
                "id": 17680,
                "user": 17684,
                "first_name": "Mikhail",
                "last_name": "Malt",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/71f6031b5a9d3440a79ba06b4e4f528a?s=120&d=retro",
                "biography": "I am a researcher in the Musical Representations team at IRCAM, a Computer Music Designer and Teacher (within the IRCAM Department of Pedagogy), Associate Research Director at Sorbonne University, and a composer. I have a scientific and musical background (engineering, composition and conducting), and my research focuses mainly on computer-assisted music writing (computer-assisted composition) and musical formalization.\r\n\r\nSince my arrival at IRCAM (October 1990 as a student and 1992 as a research composer), my main activity has been divided between research and teaching, especially in the composition and computer music curriculum.\r\n\r\nCurrently, my work is developing along three axes: \r\n\r\n•  modeling and musical representation: the study of the expressivity of formal models in computer-assisted composition and in real-time generative music, and the modeling of open works; \r\n•  the development of interfaces and tools for computer-assisted composition; \r\n•  musical analysis, computer-assisted musical performance, and musical creation.",
                "date_modified": "2025-10-26T12:39:27.735828+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 341,
                        "forum_user": 17680,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-08",
                        "type": 0,
                        "keys": [
                            {
                                "id": 5,
                                "membership": 341
                            },
                            {
                                "id": 27,
                                "membership": 341
                            },
                            {
                                "id": 802,
                                "membership": 341
                            },
                            {
                                "id": 806,
                                "membership": 341
                            },
                            {
                                "id": 812,
                                "membership": 341
                            },
                            {
                                "id": 822,
                                "membership": 341
                            },
                            {
                                "id": 861,
                                "membership": 341
                            },
                            {
                                "id": 881,
                                "membership": 341
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "mmalt",
            "first_name": "Mikhail",
            "last_name": "Malt",
            "bookmarks": []
        },
        "slug": "benny-sluchin-and-mikhail-malt-somax-2-update",
        "pk": 1123,
        "published": false,
        "publish_date": "2022-03-11T18:01:27.776636+01:00"
    },
    {
        "title": "Call for Projects - \"Arts Numériques, Art Sonore & Nouvelles Écritures\" 22_23 Château Éphémère",
        "description": "Château Éphémère is pleased to launch, for the 8th year, its call for projects \"Arts numériques, Art sonore & Nouvelles Écritures\" for the 2022/2023 season. All projects whose production process draws simultaneously on new digital practices and on sound and musical creation are eligible. ",
        "content": "<p><strong>For eight years, the Vanderlab association has been based in a converted heritage site, the Ch&acirc;teau &Eacute;ph&eacute;m&egrave;re in Carri&egrave;res-sous-Poissy (Yvelines). This venue dedicated to digital, sound and musical creation has three main objectives</strong>:</p>\n<p>&nbsp;</p>\n<p>&bull; To offer French and international artists creative residencies;</p>\n<p>&bull; To promote the creative uses of new technologies to the general public through a range of workshops combined with a monthly artistic programme;</p>\n<p>&bull; To offer everyone a space for meeting and conviviality.</p>\n<p>&nbsp;</p>\n<p>This project, supported notably by the Communaut&eacute; Urbaine du Grand Paris Seine et Oise, the &Icirc;le-de-France Region and the Conseil D&eacute;partemental des Yvelines, was initiated by the associations Usines &Eacute;ph&eacute;m&egrave;res and Musiques et Cultures Digitales.</p>\n<p>As part of its residency programme, Ch&acirc;teau &Eacute;ph&eacute;m&egrave;re is launching the eighth edition of its call for applications, whose purpose is to support sound and digital artistic creation while benefiting the local area and its inhabitants.</p>\n<p><br><strong>Eligibility:&nbsp;</strong></p>\n<p>&bull; All projects whose production process draws simultaneously on new digital practices and on sound and musical creation are eligible.&nbsp;</p>\n<p>&bull; Projects may be submitted by an artist, a collective, or a production or distribution organization.</p>\n<p>&bull; The jury will pay particular attention to projects:&nbsp;<br>1 - wishing to develop activities aimed at the local community (workshops, encounters...).<br>2 - reflecting on the environment, whether through the choice of materials used in the design of the work and/or through the very subject of the project developed during the residency.</p>\n<p>&bull; Each residency may also conclude with a public presentation.</p>\n<p>&nbsp;</p>\n<p><strong>Hosting conditions:</strong><br>&nbsp;<br>Each of the 12 selected projects will be hosted between September 2022 and July 2023 for a maximum of one month (which may be split into two parts).</p>\n<p><strong>This support includes:</strong></p>\n<p>&bull; Accommodation, a workspace and access to the technical equipment of Ch&acirc;teau &Eacute;ph&eacute;m&egrave;re (technical inventory downloadable here).</p>\n<p>&bull; Project leaders will also benefit from the support of the technical manager and the LabManager, subject to their availability.</p>\n<p>&bull; They will also receive production support of &euro;500 (incl. tax).</p>\n<p>&bull; As part of our partnership with the P&eacute;pini&egrave;res Europ&eacute;ennes de Cr&eacute;ation, one or two additional grants of &euro;500 (incl. tax) each will be awarded to one or two selected projects, and distribution opportunities in Belgium may also be considered with our partner Transcultures, a centre for digital and sound cultures based in Wallonia.</p>\n<p><br><strong>&nbsp;★ Applications are open until 3 April 2022, 12:00.&nbsp;</strong></p>\n<p><strong>&nbsp;★ Information and application file:&nbsp;</strong><a href=\"https://urlz.fr/hCZh\">https://urlz.fr/hCZh</a>&nbsp; / <a href=\"https://chateauephemere.org/\">www.chateauephemere.org</a></p>\n<p><strong>&nbsp;★ Contact: aap@chateauephemere.org&nbsp;</strong></p>",
        "topics": [],
        "user": {
            "pk": 27502,
            "forum_user": {
                "id": 27474,
                "user": 27502,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f457e8598e11a62a9411dff6f5151c81?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "chateauephemere",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "appel-a-projets-arts-numeriques-art-sonore-nouvelles-ecritures-22_23-chateau-ephemere",
        "pk": 1117,
        "published": false,
        "publish_date": "2022-03-07T16:50:39.788225+01:00"
    },
    {
        "title": "Celestial Armillary and Ubiquitous Wave",
        "description": "Celestial Armillary and Ubiquitous Wave is a multimedia experience combining Spatial Audio and Virtual Reality that explores the cognition of sound and the cosmos.",
        "content": "<p><span><em><strong><img alt=\"\" src=\"/media/uploads/user/4e34f3646a5dec0e54c1583520410748.jpeg\"></strong></em></span></p>\n<p>&nbsp;</p>\n<p><span><em><strong>Celestial Armillary and Ubiquitous Wave</strong></em> is a multimedia two-perspective (on-site/virtual) experience of the same theme that explores the cognition of sound and the cosmos in a multisensory context.</span></p>\n<p>&nbsp;</p>\n<p><span>This project is inspired by modern and ancient Chinese observational cosmology. We hope to translate the model through sound and visual language to create a new version&mdash;a passage that can link the past and the present. In the 4th century B.C., the ancient Chinese began to use the armillary sphere to measure and interpret celestial objects. It was used to construct perceptions of the external world. In this age of modern technology, astronomical data measurement and sonification are also iterating to explore the human-universe relationship. A new awareness of the universe is provoked by combining a Higher-Order Ambisonics (HOA) sound experience with a Virtual Reality experience. These two experiences perform in parallel and create a mirror heterotopia.</span></p>\n<p>&nbsp;</p>\n<p><span>The first, spatial sound experience transports the audience to the center of a giant armillary in space. At the same time, the moving image creates a &ldquo;remeasurement&rdquo; of the Asian astronomical map and responds to the experimental music. The rotation of the giant armillary sphere is accompanied by various Chinese instruments, such as the zither, flute, xun, and drums. Starting from the Sun, it takes the audience on a slow astronomical sound-wave journey through the nine planets.</span></p>\n<p><img alt=\"\" src=\"/media/uploads/user/63bd20f53360b3fd8b3585bd82e0f385.png\"></p>\n<p><span>In our second, virtual reality experience, the ambisonic sound and interactive experience immerse people in a world of space measurement. With 6DoF, each step the audience takes changes the armillary sphere and its distance from the planets. The audience is encouraged to use their bodies in the virtual space to measure the space-time transformation of the universe. Through the perception of spatial changes in the armillary sphere and the planets, this experience amplifies the sense of embodiment.</span></p>\n<p>&nbsp;</p>\n<p><span>Acting as a key, the multisensory experience will open the threshold to a cosmic archaeological experience of the universe for the audience.</span></p>",
        "topics": [
            {
                "id": 1124,
                "name": "cosmology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 369,
                "name": "Multichannel",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1125,
                "name": "multimedia",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 620,
                "name": "Spatialaudio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 301,
                "name": "Virtual reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 702,
                "name": "Waves",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 32631,
            "forum_user": {
                "id": 32583,
                "user": 32631,
                "first_name": "ke",
                "last_name": "peng",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/754ed83e3ecb6b7174fd307abb9467b5?s=120&d=retro",
                "biography": "Ke Peng(PK) is a multidisciplinary artist and designer working with multiple mediums, currently studying Information Experience Design at the Royal College of Art. Her interest in creation resides in visual, sound, light, and new materialism. Her research materializes as installations, audiovisuals, and digital arts aiming to create multimedia experiences that make the invisible manifest.",
                "date_modified": "2023-02-06T17:12:28+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "kpeng",
            "first_name": "ke",
            "last_name": "peng",
            "bookmarks": []
        },
        "slug": "celestial-armillary-and-ubiquitous-wave",
        "pk": 2039,
        "published": false,
        "publish_date": "2023-02-06T19:08:29.027179+01:00"
    },
    {
        "title": "Somax 2 is out!",
        "description": "Somax 2 is a completely redesigned version of the mythical Somax reactive co-improvisation paradigm, conceived in IRCAM's Musical Representations team but never distributed until now.\r\nSomax 2 is an application for musical improvisation and composition based on an original generative model, artificial listening, and reactive multi-agents. It generates stylistically coherent improvisations while listening and adapting, strictly or loosely, to a musician (or any other type of audio or MIDI source) in real time.",
        "content": "<p><a href=\"/projects/detail/somax-2/\" target=\"_blank\" rel=\"noopener\">Somax 2</a> is an application for musical improvisation and composition. It is implemented in Max and is based on a generative model using a process similar to concatenative synthesis to provide stylistically coherent improvisation, while listening to and adapting to a musician (or any other type of audio or MIDI source) in real time. The model operates in the symbolic domain and is trained on a musical corpus, consisting of one or multiple MIDI files, from which it draws the material used for improvisation. The model can be used with little configuration to autonomously interact with a musician, but it also allows manual control of its generative process, effectively letting the model serve as an instrument that can be played in its own right.</p>\r\n<p>While the application can be used straight out of the box with little configuration (see Getting Started), it is also designed as a library, allowing the user to create custom models as well as set up networks of multiple models and sources that listen to and interact with each other.</p>\r\n<p>Somax 2 is a totally new version of the mythical Somax reactive co-improvisation paradigm designed in the IRCAM Musical Representations team but never publicly released until now. Written in Max and Python,&nbsp;it features a modular multithreaded implementation, multiple wireless interacting players, a new UI design with tutorials and documentation, as well as a number of new interaction parameters.</p>\r\n<p><a href=\"https://www.stms-lab.fr/projects/pages/somax2/\" target=\"_blank\" rel=\"noopener\">READ MORE ...</a></p>",
        "topics": [
            {
                "id": 281,
                "name": "Composition",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 546,
                "name": "Ia",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 400,
                "name": "Interactive machine learning",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 583,
                "name": "Omax",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17748,
            "forum_user": {
                "id": 17743,
                "user": 17748,
                "first_name": "Gerard",
                "last_name": "Assayag",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e7f22ca09fef8b854d33ed5de26b107e?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-11-03T15:40:57.523680+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 1236,
                        "forum_user": 17743,
                        "date_start": "1970-01-01",
                        "date_end": "2126-04-04",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "assayag",
            "first_name": "Gerard",
            "last_name": "Assayag",
            "bookmarks": []
        },
        "slug": "somax-2-est-sorti",
        "pk": 958,
        "published": true,
        "publish_date": "2021-06-07T16:22:01+02:00"
    },
    {
        "title": "Production and broadcasting chain used during the Fip360 \"Electro\" concerts -  Hervé DEJARDIN",
        "description": "Presentation during the Ircam Forum Workshop 2023 In Paris",
        "content": "<p>Explanation of a use case of the object-oriented production mode allowing simultaneously a 360&deg; sound broadcast, a binaural web broadcast and the recording of an audio multitrack with its associated metadata.</p>",
        "topics": [],
        "user": {
            "pk": 9457,
            "forum_user": {
                "id": 9454,
                "user": 9457,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/HERVE_DEJARDIN_BIO.jpg",
                "avatar_url": "/media/cache/8d/6d/8d6d8a442f5fe8c422f2607391d5c393.jpg",
                "biography": "Project manager in the audio innovation unit of Radio France.\r\n\r\nHe works on different aspects of the creation, production and distribution of immersive and object-based audio content.\r\nHis work is intended for the wide range of Radio France productions (Reports, documentaries, drama, sporting events, all musical styles ...)\r\nCurrently, his favorite field is electronic music.\r\nWith FIP, he collaborates on FIP 360 collection, which offers electronic concerts in 360° immersive sound produced in object-based audio.  \r\nHe also collaborates with Jean-Michel Jarre, Arthur-H, Molécule and Irène Drésel on various ongoing projects.",
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Dejardin",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "presentation-and-demonstration-of-the-production-and-broadcasting-chain-used-during-the-fip360-electro-concerts",
        "pk": 2063,
        "published": true,
        "publish_date": "2023-02-14T17:14:15+01:00"
    },
    {
        "title": "The World is Over! Une introspection sonore ou un référendum musical ?",
        "description": "Entre le 1er avril et le 17 juin 2020 chaque participant a été invité à envoyer un morceau sur une des faces :\r\nLa face A pour arrêter l’humanité et la face B pour continuer.\r\nLes morceaux étaient mis en ligne au jour le jour afin de pouvoir voir si la balance penchait d'un coté ou de l'autre. 92 morceaux ont été collectés répartis sur 2 faces, pour une durée totale de 10h 23mn.",
        "content": "<p><img class=\"bbc_img\" src=\"http://www.necktar.info/Over/Or/images/ANIMA_PIKA.gif\" alt=\"ANIMA_PIKA.gif\" width=\"777\" height=\"777\" /><br /><br /><a class=\"bbc_url\" title=\"Lien externe\" href=\"https://archive.org/details/VA_The_World_is_Over__A_side\" rel=\"nofollow external\">Face A</a>&nbsp; / <a class=\"bbc_url\" title=\"Lien externe\" href=\"https://archive.org/details/VA_The_World_is_Over__B_side\" rel=\"nofollow external\">Face B</a><br />Liens de t&eacute;l&eacute;chargement facile&nbsp;<br />Face A en<br /><a class=\"bbc_url\" title=\"Lien externe\" href=\"https://archive.org/compress/VA_The_World_is_Over__A_side/formats=WAVE&amp;file=/VA_The_World_is_Over__A_side.zip\" rel=\"nofollow external\">WAV</a><br /><a class=\"bbc_url\" title=\"Lien externe\" href=\"https://archive.org/compress/VA_The_World_is_Over__A_side/formats=FLAC&amp;file=/VA_The_World_is_Over__A_side.zip\" rel=\"nofollow external\">FLAC </a><br /><a class=\"bbc_url\" title=\"Lien externe\" href=\"https://archive.org/compress/VA_The_World_is_Over__A_side/formats=VBR%20MP3&amp;file=/VA_The_World_is_Over__A_side.zip\" rel=\"nofollow external\">MP3</a><br />Face B en<br /><a class=\"bbc_url\" title=\"Lien externe\" href=\"https://archive.org/compress/VA_The_World_is_Over__B_side/formats=WAVE&amp;file=/VA_The_World_is_Over__B_side.zip\" rel=\"nofollow external\">WAV</a><br /><a class=\"bbc_url\" title=\"Lien externe\" href=\"https://archive.org/compress/VA_The_World_is_Over__B_side/formats=FLAC&amp;file=/VA_The_World_is_Over__B_side.zip\" rel=\"nofollow external\">FLAC</a><br /><a class=\"bbc_url\" title=\"Lien externe\" href=\"https://archive.org/compress/VA_The_World_is_Over__B_side/formats=VBR%20MP3&amp;file=/VA_The_World_is_Over__B_side.zip\" rel=\"nofollow external\">MP3 </a><br /><br />Qu&rsquo;est-ce que \"The World is Over!\" :<br />Une &eacute;trange ordalie audio ?<br />Ou juste une meta compilation pour choisir si l&rsquo;humanit&eacute; devrait 
s&rsquo;arr&ecirc;ter ou continuer ?<br />Les deux sans doute&hellip; et peut &ecirc;tre <a class=\"bbc_url\" title=\"Lien externe\" href=\"http://necktar.info/Over/Or/experience_musicale_conceptuelle_en_t%C3%A9l%C3%A9chargement_gratuit_FR.html\" rel=\"nofollow external\">plus encore</a>.<br /><br />Il y a un brouillon de carnet de bord pour comprendre un peu mieux ce projet. En particulier pour suivre la r&eacute;flexion. Cela a commenc&eacute; comme une qu&ecirc;te du libre arbitre et cela se termine plus ou moins par cette constatation :<br />Pourquoi les deux faces sont &eacute;quilibr&eacute;es ?<br />Peut &ecirc;tre &agrave; cause du genre de confusion comme le fait que choisir la face A pouvait &ecirc;tre interpr&eacute;t&eacute; comme arr&ecirc;ter de ne pas changer. Et la face B continuer d&rsquo;essayer de changer. Ou alors c&rsquo;est r&eacute;ellement principalement parce que celles et ceux qui pourraient changer le monde feront l&rsquo;impossible pour &ecirc;tre bienveillants envers l&rsquo;autre cot&eacute;, et feront de leur mieux pour ne tuer aucun ennemi, malgr&eacute; les cons&eacute;quences pour tous&hellip; Et que celles et ceux qui emp&ecirc;chent le monde de changer feront de leur mieux pour &ecirc;tre malveillants envers l&rsquo;autre cot&eacute;, et feront de leur mieux pour &eacute;liminer tous leurs ennemis quelques en soient les cons&eacute;quences. Quoi qu&rsquo;il en soit on ne peut changer que nous-m&ecirc;me. Est-ce que j&rsquo;ai dis que le Yin-Yang est un symbole d&rsquo;&eacute;quilibre ? Eh bien trouve tes propres r&eacute;ponses, ce carnet de bord n&rsquo;est que la qu&ecirc;te des miennes. 
Et sois patient car j&rsquo;ai besoin de l&rsquo;am&eacute;liorer un peu&hellip;<br />Si tu veux plonger dedans il est ici :<br /><a class=\"bbc_url\" title=\"Lien externe\" href=\"http://www.necktar.info/Over/Or/LogBook.html\" rel=\"nofollow external\">http://www.necktar.i...Or/LogBook.html</a><br /><br />Il y a aussi beaucoup plus d'informations sur la th&eacute;matique et de quoi &eacute;clairer les lanternes ici <a class=\"bbc_url\" title=\"Lien externe\" href=\"http://www.librescommeres.fr/read/219\" rel=\"nofollow external\">http://www.librescommeres.fr/read/219</a><br /><br />Un pas de plus dans un process plus long...<br />Ainsi c&rsquo;est un second volume hors s&eacute;rie de Necktar apr&egrave;s PIKADON. <a class=\"bbc_url\" title=\"Lien externe\" href=\"https://archive.org/details/How_To_Download_Free_Music_Unusual_Album_Remix_Pikadon\" rel=\"nofollow external\">https://archive.org/...m_Remix_Pikadon</a> Necktar est difficile &agrave; d&eacute;finir... ce qui est certain c&rsquo;est que le voyage multidimensionnel &agrave; commenc&eacute; il y a 20 ans. <a class=\"bbc_url\" title=\"Lien externe\" href=\"http://www.necktar.info\" rel=\"nofollow external\">http://www.necktar.info</a><br /><br />Pourquoi The World is Over! est apparu en 2020 ?<br />C&rsquo;&eacute;tait pr&eacute;vu depuis de nombreux mois, j&rsquo;en ai parl&eacute; pour la premi&egrave;re fois en ao&ucirc;t 2019. Je voulais trouver quelque chose qui soit une &eacute;tape avant le dernier Necktar. J&rsquo;ai regard&eacute; un documentaire sur Yoko Ono et John Lennon et alors j&rsquo;ai revu cette photo exceptionnelle de l&rsquo;action &laquo; The War is Over! &raquo; et j&rsquo;ai pens&eacute; quelque chose comme :<br />- C&rsquo;est terrible, depuis pr&egrave;s d&rsquo;un demi si&egrave;cle rien n&rsquo;a chang&eacute; ! 
A cette &eacute;poque le plus grave probl&egrave;me &eacute;tait la guerre, actuellement cela semble la fin du monde avec le r&eacute;chauffement climatique, mais en r&eacute;alit&eacute; c&rsquo;est encore la guerre : la guerre du capitalisme contre la vie. (Ce n&rsquo;est que mon opinion) Alors j&rsquo;ai trouv&eacute; cette id&eacute;e de d&eacute;tournement car un &eacute;v&eacute;nement disruptif me semble la meilleur fa&ccedil;on de pouvoir r&eacute;aliser que les choses ne sont pas impossible &agrave; changer. Et dans tout les cas un peu d&rsquo;introspection n&rsquo;a jamais fait de mal &agrave; personne. Parlons peu, l&rsquo;exp&eacute;rimentation est pr&eacute;f&eacute;rable. Ainsi entre le premier avril et le 17 juin 2020, ce projet a &eacute;t&eacute; cr&eacute;&eacute; sous la forme d&rsquo;un &laquo; work-in-progress &raquo; afin de montrer les changements dans la playliste et de suivre l&rsquo;&eacute;volution du projet en temps r&eacute;el.<br /><br />Informations Techniques :<br />En raison de la nature extr&ecirc;mement diverse des morceaux j&rsquo;ai essay&eacute; de changer le moins possible leur niveau sonore, pour une version sans aucune modification, merci de passer par les pages bandcamp Face A <a class=\"bbc_url\" title=\"Lien externe\" href=\"https://yoshiwaku.bandcamp.com/album/the-world-is-over-a-side\" rel=\"nofollow external\">https://yoshiwaku.ba...-is-over-a-side</a> &amp; Face B <a class=\"bbc_url\" title=\"Lien externe\" href=\"https://zonefusion.bandcamp.com/album/the-world-is-over-b-side\" rel=\"nofollow external\">https://zonefusion.b...-is-over-b-side</a><br /><br />Page Officielle :<br /><a class=\"bbc_url\" title=\"Lien externe\" href=\"https://www.necktar.info/Over/Or/\" rel=\"nofollow external\">https://www.necktar.info/Over/Or/</a><br /><br />Video Art Clips :<br />Teaser The World is Over!<br /><iframe width=\"640\" height=\"390\" id=\"ytplayer\" class=\"EmbeddedVideo\" 
src=\"https://youtube.com/embed/lj__bFtaLrM?html5=1&amp;fs=1\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"></iframe><br />Easter Eggghost Track<br /><iframe width=\"640\" height=\"390\" id=\"ytplayer\" class=\"EmbeddedVideo\" src=\"https://youtube.com/embed/z34Iugf3FRM?html5=1&amp;fs=1\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"></iframe><br /><br /><br />D&eacute;dicace :<br />La compilation The World is Over! est d&eacute;di&eacute;e &agrave; celles et ceux qui ne baissent jamais les bras face &agrave; l&rsquo;adversit&eacute; et ainsi finissent souvent par d&eacute;couvrir des chemins in&eacute;dits&hellip; voir m&ecirc;me parfois &agrave; parvenir &agrave; exercer leur libre arbitre.<br /><br />Remerciements :<br />Merci &agrave; Yoko Ono et John Lennon pour avoir cr&eacute;&eacute; &laquo; The War is Over! &raquo;<br />Merci &agrave; celles et ceux qui ont fait le choix de permettre &agrave; cette compilation d&rsquo;exister,<br />sans avoir eu peur de remettre en cause le dogme de la survie de l&rsquo;esp&egrave;ce humaine.<br />Merci &agrave; tous les auditeurs qui se poseront cette question :<br />- Est-ce que si j&rsquo;avais le choix je choisirais que l&rsquo;humanit&eacute; s&rsquo;arr&ecirc;te ou qu&rsquo;elle continue ?<br /><br />Le mot de la fin sur le pand&eacute;mie du Covid-19 :<br />Apr&egrave;s avoir constat&eacute; qu&rsquo;il s&rsquo;agit d&rsquo;une simple question de volont&eacute; pour pouvoir emp&ecirc;cher le cataclysme climatique. &laquo;The Wold is over! If you want it... &raquo;. 
R&eacute;sidant en France, peu de temps apr&egrave;s que le premier confinement soit termin&eacute;, j&rsquo;ai dis &agrave; une amie :<br />- La sortie de confinement ici me fait un peu penser &agrave; un pendu qui d&eacute;fait sa corde pour aller pisser et ensuite qui retourne s&rsquo;accrocher.<br />Ainsi cela ne d&eacute;pend que de nous pour que le monde ne soit pas finit.<br />.<br /><img class=\"bbc_img\" src=\"http://www.necktar.info/Over/Or/images/MEME_THEWORLDISOVER_lofi.gif\" alt=\"MEME_THEWORLDISOVER_lofi.gif\" width=\"701\" height=\"1045\" /></p>",
        "topics": [
            {
                "id": 143,
                "name": "Ecology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 297,
                "name": "Electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 335,
                "name": "Instrumental",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1,
                "name": "OpenMusic",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 436,
                "name": "Recherche artistique",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 437,
                "name": "Recherche musicale",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18608,
            "forum_user": {
                "id": 18601,
                "user": 18608,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/bd4bf0b913ceca4b3c67e17956748cf5?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "robot_meyrat",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-world-is-over-une-instrospection-sonore-ou-un-referendum-musical",
        "pk": 717,
        "published": true,
        "publish_date": "2020-06-29T08:58:11+02:00"
    },
    {
        "title": "Tweak de la semaine (W28)",
        "description": "Flûtes et basses créent des formes avec l'interférence des ondes de sinus !",
        "content": "<div style=\"position: relative; padding-bottom: 65%; height: 0;\"><iframe width=\"300\" height=\"150\" style=\"position: absolute; top: 0; left: 0; width: 100%; height: 100%;\" src=\"https://tweakable.org/embed/examples/kustar3_v1?view=panel\" frameborder=\"0\"></iframe></div>\r\n<p><strong>Cr&eacute;ez votre propre Tweakable sur&nbsp;<a href=\"https://tweakable.org\">tweakable.org</a></strong></p>",
        "topics": [
            {
                "id": 169,
                "name": "Interaction",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 115,
                "name": "Music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 426,
                "name": "Tweakable",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 427,
                "name": "Tweakoftheweek",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18424,
            "forum_user": {
                "id": 18417,
                "user": 18424,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d36f7c122c36bf714b376ed2c132c929?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jwvsys",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "tweak-of-the-week-28",
        "pk": 733,
        "published": true,
        "publish_date": "2020-08-03T12:48:45+02:00"
    },
    {
        "title": "News on Somax - Mikhail Malt, Marco Fiorini",
        "description": "Presented during the IRCAM Forum Workshop 2023 In Paris",
        "content": "<div class=\"\"><span class=\"\"><span face=\"Verdana\" style=\"font-family: Verdana;\">Somax 2.5 is an application and a library for live co-creative interaction with musicians in improvisation composition or installation scenarios.&nbsp;<br class=\"\" />It is based on a machine listening, reactive engine and &nbsp;generative model &nbsp;that provide stylistically coherent improvisation while continuously adapting to the external&nbsp;audio or midi musical context.&nbsp;It uses a cognitive memory model based on music corpuses it analyzes and learns as stylistic bases, using a process similar to concatenative synthesis to render the&nbsp;result, and it relies on a globally learned harmonic and textural knowledge representation space using Machine Learning techniques.<br class=\"\" /><br class=\"\" />Somax2 has been totally rewritten from Somax, one of the multiple descendants of the well known Omax developed in the Music Representation team over the years and&nbsp;</span></span><span class=\"\">now</span>&nbsp;&nbsp;<span class=\"\">offers</span><span class=\"\">&nbsp;a powerful and reliable environment for co-improvisation, composition, installations, etc.&nbsp;</span><span class=\"\">Written in Max and Python, it features a modular multithreaded implementation, multiple wireless interacting players (AI agents), new UI design with tutorials and</span><span class=\"\">&nbsp;</span><span class=\"\">documentation, as well as a number of new interaction flavors and parameters.</span></div>\r\n<div class=\"\"><span class=\"\"><span face=\"Verdana\" style=\"font-family: Verdana;\"><br class=\"\" />In the new 2.5 version, it is also now designed as a Max library, allowing the user to program custom Somax2 patches allowing everybody to design one's own environment&nbsp;and processing, involving as many sources, players, influencers, renderers as needed.&nbsp;With these abstractions, implemented to provide complete Max-style programming and workflow, 
the user could achieve the same results as the Somax2 application but,&nbsp;thanks to their modular architecture, it is also possible to build custom patches and unlock unseen behaviors of interaction and control.<br class=\"\" />&nbsp;<br class=\"\" />Somax2 is developed by the Music Representation team at IRCAM and is part of ANR project MERCI (Mixed Musical Reality with Creative Instruments) and ERC REACH (Raising&nbsp;Co-creativity in Cyber-Human Musicianship) project.<br class=\"\" /><br class=\"\" />More at&nbsp;<a href=\"http://repmus.ircam.fr/somax2\" class=\"\">repmus.ircam.fr/somax2</a></span></span></div>",
        "topics": [],
        "user": {
            "pk": 17684,
            "forum_user": {
                "id": 17680,
                "user": 17684,
                "first_name": "Mikhail",
                "last_name": "Malt",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/71f6031b5a9d3440a79ba06b4e4f528a?s=120&d=retro",
                "biography": "I am a Researcher in the Musical representations team of IRCAM, Computer Music Designer Teacher (within the IRCAM Department of Pedagogy), Associate Research Director at Sorbonne University and Composer. I have a scientific and musical background (Engineering, composition and conducting) and my research focuses mainly on the theme of computer-assisted music writing (computer-assisted composition) and musical formalization.\r\n\r\nSince my arrival at IRCAM (October 1990 as a student and 1992 as a research composer) my main activity has been between research and teaching especially in the composition and computer music curriculum.\r\n\r\nCurrently, my work is developing on three axes: \r\n\r\n•  Modeling and musical representation: the study of the expressivity of formal models in computer-assisted composition, and in real-time generative music, and the modeling of open works), \r\n•  the development of interfaces and tools for computer-assisted composition, \r\n•  musical analysis and computer-assisted musical performance and musical creation.",
                "date_modified": "2025-10-26T12:39:27.735828+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 341,
                        "forum_user": 17680,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-08",
                        "type": 0,
                        "keys": [
                            {
                                "id": 5,
                                "membership": 341
                            },
                            {
                                "id": 27,
                                "membership": 341
                            },
                            {
                                "id": 802,
                                "membership": 341
                            },
                            {
                                "id": 806,
                                "membership": 341
                            },
                            {
                                "id": 812,
                                "membership": 341
                            },
                            {
                                "id": 822,
                                "membership": 341
                            },
                            {
                                "id": 861,
                                "membership": 341
                            },
                            {
                                "id": 881,
                                "membership": 341
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "mmalt",
            "first_name": "Mikhail",
            "last_name": "Malt",
            "bookmarks": []
        },
        "slug": "news-on-somax-mikhail-malt-marco-fiorini",
        "pk": 2141,
        "published": true,
        "publish_date": "2023-03-15T11:56:58+01:00"
    },
    {
        "title": "Spectrogram inpainting for interactive generation of instrument sounds",
        "description": "Theis Bazin is making a all day demo session in the -2 @Ircam-Forum about \"Spectrogram inpainting for interactive generation of instrument sounds\". in short Using AI \"to reconstruct\" parts of a spectrogram \"having as a model\" instrumental sounds ",
        "content": "<p>Theis Bazin (doctorate) is making an all day demo session, in the -2 @Ircam-Forum about <strong>\"Spectrogram inpainting for interactive generation of instrument sounds\"</strong>. In short, using AI \"to reconstruct\" parts of a spectrogram \"having as a model\" instrumental sounds !!!</p>\n<p><img alt=\"\" src=\"/media/uploads/user/654156de0d608b2e13a7e36ecdaf01c9.jpg\"></p>",
        "topics": [
            {
                "id": 753,
                "name": "Artificial intelligence,",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 265,
                "name": "Sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17684,
            "forum_user": {
                "id": 17680,
                "user": 17684,
                "first_name": "Mikhail",
                "last_name": "Malt",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/71f6031b5a9d3440a79ba06b4e4f528a?s=120&d=retro",
                "biography": "I am a Researcher in the Musical representations team of IRCAM, Computer Music Designer Teacher (within the IRCAM Department of Pedagogy), Associate Research Director at Sorbonne University and Composer. I have a scientific and musical background (Engineering, composition and conducting) and my research focuses mainly on the theme of computer-assisted music writing (computer-assisted composition) and musical formalization.\r\n\r\nSince my arrival at IRCAM (October 1990 as a student and 1992 as a research composer) my main activity has been between research and teaching especially in the composition and computer music curriculum.\r\n\r\nCurrently, my work is developing on three axes: \r\n\r\n•  Modeling and musical representation: the study of the expressivity of formal models in computer-assisted composition, and in real-time generative music, and the modeling of open works), \r\n•  the development of interfaces and tools for computer-assisted composition, \r\n•  musical analysis and computer-assisted musical performance and musical creation.",
                "date_modified": "2025-10-26T12:39:27.735828+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 341,
                        "forum_user": 17680,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-08",
                        "type": 0,
                        "keys": [
                            {
                                "id": 5,
                                "membership": 341
                            },
                            {
                                "id": 27,
                                "membership": 341
                            },
                            {
                                "id": 802,
                                "membership": 341
                            },
                            {
                                "id": 806,
                                "membership": 341
                            },
                            {
                                "id": 812,
                                "membership": 341
                            },
                            {
                                "id": 822,
                                "membership": 341
                            },
                            {
                                "id": 861,
                                "membership": 341
                            },
                            {
                                "id": 881,
                                "membership": 341
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "mmalt",
            "first_name": "Mikhail",
            "last_name": "Malt",
            "bookmarks": []
        },
        "slug": "spectrogram-inpainting-for-interactive-generation-of-instrument-sounds",
        "pk": 1135,
        "published": false,
        "publish_date": "2022-03-25T11:23:19.814204+01:00"
    },
    {
        "title": "Moving Towards Synchrony",
        "description": "Moving Towards Synchrony is an immersive work whose purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that has been generated -and defined by- those same physiological events.",
        "content": "<p>This is a link to the video presentaton:&nbsp;<br><a href=\"https://vimeo.com/514333273\">https://vimeo.com/514333273&nbsp;</a></p>\n<p><strong>Introduction:</strong></p>\n<p>My name is Johnny Tomasiello and I am a multidisciplinary artist and composer, living and working in New York.<span>&nbsp;</span></p>\n<p>My piece, titled <em>Moving Towards Synchrony, version 3, </em>is an immersive work whose purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that has been generated -and defined by- those same physiological events.</p>\n<p>It investigates the neurological effects of modulating brain waves and their corresponding physiological effects by use of a Brain-Computer Music Interface, which allows for the sonification of the data captured by an electroencephalogram.</p>\n<p>The work presents an interactive computer-assisted compositional performance system that can teach participants how to influence a positive change in their own physiology by learning to influence the functions of the autonomic nervous system through neuro- and bidirectional feedback.<span>&nbsp;</span></p>\n<p>The methodology involves collecting physiological data through non invasive neuroimaging. A subject&rsquo;s brainwaves are used to generate realtime interactive music compositions which are simultaneously experienced by that subject. The melodic and rhythmic content, are derived from, and constantly influenced by, the subject&rsquo;s EEG readings. A subject, focusing on the generative stimuli, will attempt to elicit a change in their physiological systems through their experience of the bidirectional feedback. 
The resulting physiological responses will be recorded and measured to determine the efficacy of using external stimuli to affect the human body both physiologically and psychologically.<br><br>EEG brainwave data has shown high levels of success in classifying mental states [1], which affect &ldquo;autonomic modulation of the cardiovascular system&rdquo; [2], and there are existing studies investigating how music can influence a response in the autonomic nervous system. [3] It is with these phenomena in mind that this work was created.<span>&nbsp;</span></p>\n<p>Increased activity in the alpha wave frequency range is &ldquo;usually associated with alert relaxation&rdquo;. [4] Methods intended to increase activity in the alpha wave frequency range through feedback, autogenic meditation, breathing exercises, and other techniques are collectively called alpha training.</p>\n<p>Positive changes in alpha are what I am primarily concerned with here, since research has shown that stimulating activity within alpha causes muscle relaxation, pain reduction, breathing rate regulation, and decreased heart rate. [4] [5] [6] It has also been used to reduce stress, anxiety and depression, and can encourage improvements in memory and mental performance and aid in the treatment of brain injuries.</p>\n<p>In addition to investigating these neuroscience concerns, this work is designed to explore the validity of using the scientific method as an artistic process. The methodology will be to create an evidence-based system for the purpose of developing research-based projects. This will limit, initially, subjective interpretation of the work and will encourage a mindful and intentional interaction with the experience itself. What is learned will determine the value of the work.</p>\n<p>As Gita Sarabhai expressed to John Cage, &ldquo;...music conditions one's mind, leading to &lsquo;moments in [one's] life that are complete and fulfilled&rsquo;&rdquo; [7]. 
Music, in this case, can also be used by the mind to condition one's body.</p>\n<p>&nbsp;</p>\n<p><strong>Information on EEG:</strong></p>\n<p>An electroencephalogram (also known as an EEG) is an electrophysiological monitoring method used to record the electrical activity of the brain. A typical adult human EEG signal is between 10 and 100 &micro;V (microvolts) in amplitude when measured from the scalp. It was invented by the German psychiatrist Hans Berger, who published the first human EEG recordings in 1929, and research into how brainwaves can be interpreted and modulated started shortly thereafter.<span>&nbsp; </span>Using an EEG, you are able to directly measure neural activity and capture cognitive processes in real time. Berger proved that alpha waves (also known as Berger waves) were generated by cerebral cortical neurons.</p>\n<p>In 1934, English physiologists Edgar Adrian and Bryan Matthews first described the sonification of alpha waves derived from EEG data. [8] They found that &ldquo;non-visual activities which demand the entire attention (e.g. mental arithmetic) abolish the waves; sensory stimulation which demand attention also do so&rdquo; [9], showing how concentration and thought processes affected activity in the alpha wave frequency range.</p>\n<p>The brainwave activity recorded in an EEG is a summation of the inhibitory and excitatory postsynaptic potentials that occur across a neuronal membrane. 
[10]</p>\n<p>The measurements are taken by way of electrodes placed on the scalp.<span>&nbsp; </span>The readings are&nbsp;divided into five frequency bands, delineating slow, moderate, and fast waves.<span>&nbsp; </span>The bands, from slowest to fastest, are:</p>\n<p>&nbsp;</p>\n<p><strong>Delta</strong>, with a range from approximately 0.5Hz&ndash;4Hz,<span>&nbsp;</span></p>\n<p>which signifies deepest meditation or dreamless sleep.</p>\n<p><strong>Theta</strong>, from approximately 4Hz&ndash;8Hz,<span>&nbsp;</span></p>\n<p>signifying meditation or deep sleep.<span>&nbsp;</span></p>\n<p><strong>Alpha</strong>, from approximately 8Hz&ndash;13Hz,<span>&nbsp;</span></p>\n<p>representing quietly flowing thoughts.</p>\n<p><strong>Beta</strong>, from approximately 13Hz&ndash;30Hz,<span>&nbsp;</span></p>\n<p>which corresponds to the normal waking state.</p>\n<p>And<span>&nbsp;</span></p>\n<p><strong>Gamma</strong>, from approximately 30Hz&ndash;42Hz,<span>&nbsp;</span></p>\n<p>which is most active during simultaneous processing of information that engages multiple different areas of the brain.</p>\n<p>&nbsp;</p>\n<p>&nbsp;</p>\n<p><strong>History of EEG use in music:</strong></p>\n<p>Physicist Edmond Dewan began the study of brainwaves in the early 1960s and developed a &lsquo;brainwave control system&rsquo;.<span>&nbsp; </span>The system detected changes in alpha rhythms, which were used to turn lighting on or off. &ldquo;The light could also be replaced by &lsquo;an audible device that made a beep when switched on&rsquo;, allowing Dewan to spell out the phrase &lsquo; <em>I can talk</em> &rsquo; in Morse code&rdquo;. [8] Dewan's meeting with experimental composer Alvin Lucier inspired the first actual brainwave composition.</p>\n<p>Alvin Lucier first performed <em>Music For Solo Performer</em> in 1965. 
It involved the composer sitting in a chair on stage, with his eyes closed while his brainwaves were recorded.<span>&nbsp; </span>The data from the recording was amplified and distributed to speakers set up around the room.<span>&nbsp; </span>The speakers were placed against different types of percussion instruments, so the vibration of the speakers would cause the instruments to sound.</p>\n<p>Lucier was able to control the percussion events through control of his cognitive functions, and found that a break in concentration would disrupt that control.<span>&nbsp; </span>Although mastery over the alpha rhythm was (and is) difficult, <em>Music for Solo Performer</em> greatly contributed to the field of experimental music and illustrated the depth of possibility in using EEG control over musical performance.</p>\n<p>Computer scientist Jacques Vidal published the paper <em>Toward Direct Brain-Computer Communication </em>in 1973, which first proposed the Brain-Computer Interface (BCI): a means of using the brain to control external devices.<span>&nbsp;</span></p>\n<p>This was the very beginning of BCMI research, which has evolved into an interdisciplinary field of study &ldquo;at the crossroads of music, science and biomedical engineering&rdquo; [11]. BCMIs (also referred to as Brain-Machine Interfaces, or BMIs) are still in use today, and the field of research around them is still in its infancy.</p>\n<p>&nbsp;</p>\n<p><strong>Project Overview:</strong></p>\n<p>This project records EEG signals from the subject using four non-invasive dry extra-cranial electrodes from a commercially available MUSE EEG headband. Measurements are recorded from the TP9, AF7, AF8, and TP10 electrodes, as specified by the International Standard EEG placement system, and the data is converted to absolute band powers, based on the logarithm of the Power Spectral Density (PSD) of the EEG data for each channel. 
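The per-channel conversion just described (PSD, then a logarithm to get absolute band power) can be sketched in Python. This is an illustrative sketch, not the Max patch's actual code: the function name, the periodogram estimator, and the 1e-12 floor are all assumptions.

```python
import numpy as np

# EEG band ranges as given in this article (Hz)
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "alpha": (8.0, 13.0),
         "beta": (13.0, 30.0), "gamma": (30.0, 42.0)}

def absolute_band_powers(samples, fs):
    """Absolute power per band for one channel: log10 of the summed
    one-sided periodogram PSD inside each band's frequency range."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)  # one-sided periodogram
    return {name: float(np.log10(psd[(freqs >= lo) & (freqs < hi)].sum() + 1e-12))
            for name, (lo, hi) in BANDS.items()}
```

A quick sanity check: for a pure 10 Hz test tone, the alpha band (8-13 Hz) should dominate the other four bands.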
Heart rate data is obtained through PPG measurements, although that data is not used in the current version of this project. EEG measurements are recorded in bels to determine the PSD within each of the frequency ranges.</p>\n<p>The EEG readings are translated into music in real time, and the subjects are instructed to employ deep breathing exercises while they focus on the musical feedback. <br><br>Great care was taken in defining the compositional strategies of the interactive content in order to deliver a truly generative composition that was also capable of producing musically recognizable results.<span>&nbsp;</span></p>\n<p>All permutations of the scales, modes and chords being used, as well as rhythms and performance characteristics, needed to be considered beforehand so that a finite set of parameters extracted from the EEG data could be parsed and used to produce a well-formed and dynamic piece of music.</p>\n<p>There are three main sections of this Max patch:</p>\n<p>1: The <strong>EEG data capture</strong> section.</p>\n<p>2: The <strong>EEG data conversion</strong> section.</p>\n<p>3: The<strong> Sound generation and DSP</strong> section.</p>\n<p>The <strong>EEG data capture</strong> section receives EEG data from the Muse headband, which is converted to OSC data and transmitted over WiFi via the iOS app Mind Monitor.<span>&nbsp; </span>That data is then split into the five separate brainwave frequency bandwidths: delta, theta, alpha, beta and gamma.<span>&nbsp; </span>Additional data is also captured, including accelerometer, gyroscope, blink and jaw clench, in order to control for any artifacts in the data capture.<span>&nbsp; </span>Sensor connection data is used to visualize the integrity of the sensor&rsquo;s attachment to the subject. 
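Outside Max, the capture-and-split step above can be sketched as a small, dependency-free routing function. The OSC address pattern (`.../alpha_absolute`, `.../blink`, and so on) is an assumption modeled on Mind Monitor's output and should be verified against your own stream.

```python
# Sketch of the "EEG data capture" routing: each incoming OSC message
# updates the latest reading for its band; blink / jaw-clench messages
# are kept as artifact flags. Address names are assumptions.

BAND_NAMES = ("delta", "theta", "alpha", "beta", "gamma")

def route_message(address, values, state):
    """Update `state` in place from one OSC message and return it."""
    leaf = address.rsplit("/", 1)[-1]
    if leaf.endswith("_absolute") and leaf[:-len("_absolute")] in BAND_NAMES:
        # average the four electrode channels (TP9, AF7, AF8, TP10)
        state[leaf[:-len("_absolute")]] = sum(values) / len(values)
    elif leaf in ("blink", "jaw_clench"):
        state[leaf] = bool(values[0])  # artifact flags
    return state
```

In a live setup this function would sit behind an OSC/UDP listener; unknown addresses are simply ignored, so extra sensor streams do not disturb the band state.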
PPG data is also captured for use in a future iteration of the project.</p>\n<p>The <strong>EEG data conversion</strong> section accepts the EEG bandwidth data representing specific event-related potentials, and translates them into musical events.<span>&nbsp;</span></p>\n<p>First, significant thresholds for each brainwave frequency bandwidth are defined.<span>&nbsp; </span>These are chosen based on average EEG measurements taken prior to the use of the musical feedback. When those thresholds are reached or exceeded, an event is triggered.<span>&nbsp; </span>Depending on the mappings, those events can be one or more of several types of operations: the sounding of a note, a change in pitch or scale or mode, note values and timings, and/or other generative performance characteristics.</p>\n<p>&nbsp;</p>\n<p>This section comprises three subsections that format their data output differently, depending on the use case: <br>1. <strong>Internal Sound Generation and DSP</strong> for use completely within the Max environment.</p>\n<p>2. <strong>External MIDI</strong> for use with MIDI-equipped hardware or software.</p>\n<p>and<span>&nbsp;</span></p>\n<p>3. 
<strong>External Frequency</strong> <strong>and gate</strong>, for use with modular synthesizer hardware.</p>\n<p>Each of these can be used separately or simultaneously, depending on the needs of the piece.<span>&nbsp;</span></p>\n<p>For the data conversion, the event-related potentials are mapped in the following way:<br>Changes in <strong>alpha</strong>, relative to the predefined threshold, govern the triggering of notes, as well as the scale and mode.</p>\n<p>Changes in <strong>theta</strong>, relative to the threshold, influence note value.<span>&nbsp;</span></p>\n<p>Changes in <strong>beta</strong>, relative to the threshold, influence spatial qualities like reverberation and delay.</p>\n<p>Changes in <strong>delta</strong>, relative to the threshold, influence the degree of spatial effects.</p>\n<p>Changes in <strong>gamma</strong>, relative to the threshold, influence timbre.</p>\n<p>Any of these mappings or threshold decisions can be easily changed to accommodate a different thesis or set of standards.</p>\n<p>The third section is <strong>Sound generation and DSP</strong>. It is responsible for the sonification of the data translated from the <strong>EEG data conversion</strong> section. This section includes synthesis models, timbre characteristics, and spatial effects.</p>\n<p>This project uses three synthesized voices created in Max 8 for the generative musical feedback.<span>&nbsp; </span>There are two subtractive voices that each use a mix of sine, sawtooth and triangle waves, and one FM voice. <span>&nbsp;</span></p>\n<p>The timbral effects employed are waveform mixing, frequency modulation, and high-pass, band-pass and low-pass filters. 
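The band-to-parameter mapping and threshold logic described above can be sketched as follows. The threshold values and the event vocabulary here are illustrative placeholders, not the patch's actual messages.

```python
# Band -> musical parameter mapping, following the text above.
MAPPING = {
    "alpha": "note_trigger",    # also governs scale and mode
    "theta": "note_value",
    "beta": "spatial_quality",  # reverberation / delay character
    "delta": "spatial_amount",
    "gamma": "timbre",
}

def to_events(readings, thresholds):
    """Emit (parameter, reading) events for every band whose reading
    reaches or exceeds its predefined threshold."""
    return [(MAPPING[band], value)
            for band, value in readings.items()
            if band in MAPPING and value >= thresholds.get(band, float("inf"))]
```

In the patch, events like these would then be routed to one of the three output subsections (internal sound generation, external MIDI, or frequency and gate), and the thresholds would come from the pre-session baseline measurements.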
The spatial effects used include reverberation and delay.<span>&nbsp; </span>In addition to the initial settings of the voices, each of the timbral and spatial effects is modulated by separate event-related potential data captured by the EEG.</p>\n<p>&nbsp;</p>\n<p><strong>Conclusions:</strong></p>\n<p>&nbsp;</p>\n<p>This project is a contemporary interpretation of an idea I've been interested in for many years, starting with an investigation into bidirectional EKG biofeedback.<span>&nbsp;</span></p>\n<p>My initial experience with the subject was during a university degree in psychophysics (a branch of psychology). Some promising research at the university focused on reducing stress in asthmatic subjects for the purpose of lessening the frequency of attacks. [12]</p>\n<p>At the time, the technology required to explore this idea was of considerable size and prohibitively expensive for all but medical or formally funded academic purposes. With the current availability of low-cost electroencephalography (EEG) devices and heart rate monitors, the possibility of autonomous exploration of these concepts has become a reality.</p>\n<p>The procedure, when using this work for the exploration of the physiological effects of neuro- and bi-directional feedback, starts with obtaining and comparing two data sets: a control and a therapeutic data set.<span>&nbsp; </span>The control set records EEG data without utilizing musical feedback or breathing exercises.<span>&nbsp; </span>The therapeutic set records EEG data with the feedback and breathing exercises.</p>\n<p>&nbsp;</p>\n<p>Although this project is primarily concerned with changes in the alpha EEG brainwave frequency range, changes in other frequency ranges were used to trigger events in the feedback. 
This approach was adopted to ensure that a subject&rsquo;s loss of focus (and/or a drop in the PSD of alpha) would not negatively affect the generation of novel musical feedback, and with the help of consistent feedback, the subject would be able to regain their focus and continue. Depending on the subject&rsquo;s state of relaxation (and the PSD of the other four EEG frequency ranges measured), the performance and phrasing of the musical feedback would change in such a way as to encourage greater focus.</p>\n<p>For the initial proof-of-concept trials, I tested myself and a small sampling of other subjects. Preliminary data shows that alpha readings were higher, on average, during the therapeutic phase.<span>&nbsp; </span>Also, a higher overall peak value was achieved during the therapeutic phase. This suggests that this feedback model is an effective way of increasing activity in the alpha brainwave frequency range, which is the beneficial physiological and psychological effect I was hoping to find, although much more data needs to be collected before any definitive conclusions can be drawn. At this point, the system has been tested and is functional, and further research can begin. The modular design of the work allows for almost any variable to be included or excluded, which will be necessary moving forward with the research, in order to more thoroughly test the foundational elements of the thesis, as well as the musicological exploration and analysis that defining the feedback raises.<span>&nbsp; </span><br><br>In the meantime, I am already using the software as a compositional system to create recorded works and live soundtracks. 
I am also planning to mount the project as an interactive installation in a gallery setting.</p>\n<p>&nbsp;</p>\n<p>&nbsp;</p>\n<p><strong>Contact Details:</strong></p>\n<p>&nbsp;</p>\n<p>Johnny Tomasiello<br><br><a href=\"mailto:johnnytomasiello@gmail.com\">johnnytomasiello@gmail.com</a><br><br></p>\n<p>&nbsp;</p>\n<p><strong>Credits &amp; Acknowledgments:</strong></p>\n<p>IRCAM</p>\n<p>Cycling &rsquo;74</p>\n<p>Carol Parkinson, Executive Director of Harvestworks</p>\n<p>Melody Loveless, NYU &amp; Max certified trainer</p>\n<p>Dr. Paul M. Lehrer and Dr. Richard Carr</p>\n<p>InteraXon Muse electroencephalography headband<span>&nbsp;</span></p>\n<p>James Clutterbuck (Mind Monitor developer)</p>\n<p>&nbsp;</p>\n<p><strong>References:</strong></p>\n<p>&nbsp;</p>\n<p><strong>[1] &ldquo;Mental Emotional Sentiment Classification with an EEG-based Brain-Machine Interface.&rdquo;<span>&nbsp;</span></strong></p>\n<p>Bird, Jordan J.; Ekart, Aniko; Buckingham, Christopher D.; Faria, Diego R., 2019</p>\n<p>&nbsp;</p>\n<p><strong>[2] &ldquo;Effects of mental state on heart rate and blood pressure variability in men and women.&rdquo;<span>&nbsp;</span></strong></p>\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Madden+K&amp;cauthor_id=8590551\">K Madden</a>&nbsp;,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Savard+GK&amp;cauthor_id=8590551\">G K Savard</a>, 1995</p>\n<p>&nbsp;</p>\n<p>&nbsp;</p>\n<p><strong>[3] &ldquo;How Can Music Influence the Autonomic Nervous System Response in Patients with Severe Disorder of Consciousness?&rdquo;<span>&nbsp;</span></strong></p>\n<p>Francesco Riganello,* Maria D. 
Cortese, Francesco Arcuri, Maria Quintieri, and Giuliano Dolce, 2015</p>\n<p>&nbsp;</p>\n<p><strong>[4] Neurofeedback: A Comprehensive Review on System Design, Methodology and Clinical Applications</strong></p>\n<p><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/?term=Marzbani%20H%5BAuthor%5D&amp;cauthor=true&amp;cauthor_uid=27303609\"><strong>Hengameh Marzbani</strong></a><strong>, </strong><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/?term=Marateb%20HR%5BAuthor%5D&amp;cauthor=true&amp;cauthor_uid=27303609\"><strong>Hamid Reza Marateb</strong></a><strong>,</strong> <strong>and </strong><a href=\"https://www.ncbi.nlm.nih.gov/pubmed/?term=Mansourian%20M%5BAuthor%5D&amp;cauthor=true&amp;cauthor_uid=27303609\"><strong>Marjan Mansourian</strong></a><strong>,</strong><strong> 2016</strong></p>\n<p>&nbsp;</p>\n<p><strong>[5] Stress Management Techniques: Are They All Equivalent, or Do They Have Specific Effects?</strong></p>\n<p>Paul M. Lehrer and Richard Carr, 1994</p>\n<p>&nbsp;</p>\n<p><strong>[6] Alpha activity and cardiac correlates: three types of relationships during nocturnal sleep</strong></p>\n<p><a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Ehrhart+J&amp;cauthor_id=10802467\">J Ehrhart</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Toussaint+M&amp;cauthor_id=10802467\">M Toussaint</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Simon+C&amp;cauthor_id=10802467\">C Simon</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Gronfier+C&amp;cauthor_id=10802467\">C Gronfier</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Luthringer+R&amp;cauthor_id=10802467\">R Luthringer</a>,&nbsp;<a href=\"https://pubmed.ncbi.nlm.nih.gov/?term=Brandenberger+G&amp;cauthor_id=10802467\">G Brandenberger</a>, 2000</p>\n<p>&nbsp;</p>\n<p><strong>[7] &ldquo;A Composer's Confessions\"<span>&nbsp;</span></strong></p>\n<p>John Cage, 1948<span>&nbsp;</span></p>\n<p>&nbsp;</p>\n<p><strong>[8] Brainwaves in concert: the 20th century sonification of the 
electroencephalogram<br></strong>Bart Lutters, Peter J. Koehler, 2016<span>&nbsp;</span></p>\n<p>&nbsp;</p>\n<p><strong>[9] The Berger Rhythm: Potential Changes From The Occipital Lobes in Man<span>&nbsp;</span></strong></p>\n<p>Adrian and Matthews, 1934</p>\n<p>&nbsp;</p>\n<p><strong>[10] How To Interpret an EEG and its Report</strong></p>\n<p>Marie Atkinson, MD, 2010</p>\n<p>&nbsp;</p>\n<p><strong>[11] Brain-Computer Music Interfacing: Interdisciplinary Research at the Crossroads of Music, Science and Biomedical Engineering<br></strong>Miranda, ER, 2014</p>\n<p>&nbsp;</p>\n<p><strong>[12] Relaxation and Music Therapies for Asthma Among Patients Prestabilized on Asthma Medication</strong></p>\n<p>Paul Lehrer et al., 1994</p>",
        "topics": [],
        "user": {
            "pk": 18362,
            "forum_user": {
                "id": 18355,
                "user": 18362,
                "first_name": "Johnny",
                "last_name": "Tomasiello",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/4b62dafc53dcbf42b1b50f617668de0a?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-02-13T13:18:35.802851+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "Johnny_Tomasiello",
            "first_name": "Johnny",
            "last_name": "Tomasiello",
            "bookmarks": []
        },
        "slug": "moving-towards-synchrony-2",
        "pk": 1132,
        "published": false,
        "publish_date": "2022-03-21T12:54:45.839130+01:00"
    },
    {
        "title": "Musique en ligne avec TopyWeb",
        "description": "TopyWeb participe et est actif dans le monde de la musique, mais pas que... surtout la musique en ligne en décidant de mettre en place un top 10 qui regroupe les meilleurs sites de musique en ligne actuellement en France et dans le monde.",
        "content": "<p>TopyWeb fait d&eacute;sormais parti du monde artistique, en particulier celui de l'audiovisuel. En ayant particip&eacute; &agrave; de nombreuses reprises &agrave; des &eacute;v&eacute;nements, des soir&eacute;es organis&eacute;es &agrave; th&egrave;mes ou encore &agrave; des festivals et financ&eacute; certains projets, on peut admettre que TopyWeb met une pierre &agrave; l'&eacute;difice dans le monde culturel.</p>\n<p>Il a donc &eacute;t&eacute; tout naturellement d&eacute;cid&eacute; de mettre en place un top 10 des meilleurs sites de musique en ligne pour salut le monde des artistes qui en plus de ceci dans des passages compliqu&eacute; de vie comme actuellement, p&eacute;riode de guerre et de virus, le monde du spectable a bien besoin d'un coup de pouce. C'est pourquoi, TopyWeb a fabriqu&eacute; en leurs honneurs ce classement qui regroupe les sites li&eacute;s &agrave; la musique en ligne, en quelques clics vous aurez la possibitli&eacute; de trouver le site qui vous conviendra.</p>\n<p>Pour d&eacute;couvrir le site il faut <a href=\"https://www.topyweb.com/divertissement/top-sites-musique-en-ligne.php\" title=\"musique en ligne\">cliquez ici</a> !</p>\n<p>Et par la m&ecirc;me occasion, indirectement particit&eacute; &agrave; votre tour &agrave; prendre un abonnement sur l'un d'entre eux pour les aider &agrave; tenir le cap. En faisant une pierre deux coups, vous aller &eacute;galement participer &agrave; la vie artistique et grandiose qu'est le monde du divertissement. Merci d'avoir lu cet article, prenez soins de vous et que la vie vous am&egrave;ne l&agrave; o&ugrave; vous le souhaitez, c'est la chose la plus importante, la sant&eacute; et l'amour... et la musique en ligne bien entendu. Enjoy !</p>",
        "topics": [
            {
                "id": 782,
                "name": "Musique en ligne",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 28914,
            "forum_user": {
                "id": 28886,
                "user": 28914,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/f71be51e5937ffe9829b45912b9ebad8?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "topyweb",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "musique-en-ligne-avec-topyweb",
        "pk": 1151,
        "published": false,
        "publish_date": "2022-05-09T02:17:06.821732+02:00"
    },
    {
        "title": "Dalla prosodia alla musica strumentale: una sfida compositiva",
        "description": "in Lorenzo Cardilli, Stefano Lombardi Vallauri (a cura di),\nL’arte orale. Poesia, musica, performance. Accademia University Press, Torino 2020. pp.225-244.\n\nAvailable at https://www.aaccademia.it/scheda-libro?aaref=1425\n",
        "content": "<p>Abstract<br />Composers of any age have been aware of the communicative and persuasive power of prosody. However, its compositional implementation has always been challenging: the more the speaker has to follow strict intonational contours or cadences, the less authentic and natural her/his voice will sound.<br />Another approach that we will outline here has turned this limit into a resource: instead of bending the qualitative aspects of speech to the creative will of the composer, the musical material is mold on the structures of pre-existing verbal expressions. Most of the time, this is achieved by translating each feature of the &lsquo;speech-melody&rsquo; into the corresponding musical parameter.<br />The challenge encountered by the composer is not limited to the extraction and manipulation of prosodic information. It is equally about the understanding of the referencing process through which the listener encodes that piece of information.<br />Several questions emerge from these observations. This paper focuses on three of them: Which are the cognitive mechanisms underlying sonic references in general and prosodic ones in particular? How are prosodic references exploited in today&rsquo;s instrumental music? Is the referential status of speech distinct as compared to other well-recognized sounds?</p>",
        "topics": [
            {
                "id": 335,
                "name": "Instrumental",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 571,
                "name": "Prosody",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 557,
                "name": "Speech",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 777,
            "forum_user": {
                "id": 777,
                "user": 777,
                "first_name": "Fabio",
                "last_name": "Cifariello Ciardi",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/b4d85d0aa03337677e97084a18abe800?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-01-12T12:46:05.083432+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "FabioCC",
            "first_name": "Fabio",
            "last_name": "Cifariello Ciardi",
            "bookmarks": []
        },
        "slug": "dalla-prosodia-alla-musica-strumentale-una-sfida-compositiva",
        "pk": 942,
        "published": false,
        "publish_date": "2021-03-20T13:43:01.355757+01:00"
    },
    {
        "title": "Tutoriel Modalys n°10 (Epilogue) : The Reverb of the Jedi",
        "description": "Dixième et dernière partie de ma série de tutoriels sur l'utilisation de Modalys et de ses bibliothèques. Celui-ci se concentre uniquement sur Max.",
        "content": "<p style=\"text-align: justify;\"><strong>Dans ce tutoriel, nous construisons une r&eacute;verb&eacute;ration &agrave; plaques dans Max avec une plaque rectangulaire et la connexion de force.</strong></p>\r\n<p style=\"text-align: justify;\"></p>\r\n<p style=\"text-align: justify;\">C'est pour l'instant mon dernier tutoriel. Une sorte d'&eacute;pilogue. Ce fut un voyage stimulant et amusant. Modalys a un son tout &agrave; fait unique en son genre. D'une certaine mani&egrave;re, il ne sonne pas vraiment num&eacute;rique, mais pas analogique non plus. Quel terrain parfait pour faire une r&eacute;verb&eacute;ration de plaque dans Max avec les externes de Modalys. En utilisant une plaque rectangulaire qui est excit&eacute;e par la force... de la connexion ;-). Cela nous donne &eacute;galement l'occasion d'examiner les entr&eacute;es de signal pour l'objet modalys~.</p>\r\n<h6></h6>\r\n<p style=\"text-align: center;\"><iframe width=\"560\" height=\"315\" src=\"//www.youtube.com/embed/TtLZV76iT_g\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<h6></h6>\r\n<p>Merci beaucoup &agrave; tous ceux qui nous ont aid&eacute;s, en nous faisant part de leurs commentaires, en les publiant, etc. J'ai aussi beaucoup appris et bien que Modalys ait encore quelques d&eacute;fauts (documentation parfois incompl&egrave;te, utilisation importante du CPU ou comportements parfois instables) j'esp&egrave;re vraiment que l'Ircam continuera &agrave; d&eacute;velopper et &agrave; travailler sur Modalys. Et qui sait... 
I would be immensely proud if these tutorials contributed to that effort in any way.</p>\r\n<p>Best wishes to you all,</p>\r\n<p>Olav</p>\r\n<p></p>\r\n<p><strong>This tutorial series was produced by Olav Lervik.&nbsp;</strong></p>\r\n<p><strong>Find all the tutorials&nbsp;<a href=\"https://forum.ircam.fr/collections/detail/tutoriels/\">here.</a></strong></p>",
        "topics": [
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 348,
                "name": "Max externals",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 211,
                "name": "Modalys",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 269,
                "name": "Physical modeling engine",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 403,
                "name": "Reverberation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 4009,
            "forum_user": {
                "id": 4007,
                "user": 4009,
                "first_name": "Olav",
                "last_name": "Lervik",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/ee93de9099f8260f93b1c0771f90f8cc?s=120&d=retro",
                "biography": null,
                "date_modified": "2026-01-23T10:46:15.595821+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "maestrorulez",
            "first_name": "Olav",
            "last_name": "Lervik",
            "bookmarks": []
        },
        "slug": "modalys-tutoriel-n10-epilogue-the-reverb-of-the-jedi",
        "pk": 732,
        "published": true,
        "publish_date": "2020-11-24T10:00:00+01:00"
    },
    {
        "title": "SPAT Devices par Music Unit",
        "description": "Music Unit produit la collection SPAT, plugins Max For Live distribués par Ableton.",
        "content": "<p>Les plugins <a href=\"https://www.ableton.com/fr/packs/spat-bundle/\">SPAT</a>&nbsp;permettent d'agencer et d&eacute;placer des sources sonores dans des espaces audio r&eacute;els ou virtuels, en 2D ou 3D, gr&acirc;ce &agrave; des moteurs de spatialisation avanc&eacute;s, bas&eacute;s sur le processeur Spatialisateur d&eacute;velopp&eacute; &agrave;&nbsp;<a href=\"https://www.ircam.fr/\">l'IRCAM</a>&nbsp;depuis bient&ocirc;t trois d&eacute;cennies. <br><br><img alt=\"\" src=\"/media/uploads/user/6a7fac8a99cd6b475c4b13dc4c01c997.png\"><br><br>Les plugins sont propos&eacute;s en deux packs : SPAT Multichannel et SPAT Stereo.<br><br>SPAT Multichannel est destin&eacute; aux artistes, producteurs et ing&eacute;nieurs du son qui souhaitent tirer le meilleur parti de la configuration multicanale de leur studio ou salle de concert.<br><br>SPAT Stereo est destin&eacute; &agrave; celles et ceux qui disposent de configurations st&eacute;r&eacute;o simples (haut-parleurs, casque audio) et qui souhaitent tout de m&ecirc;me int&eacute;grer des techniques de spatialisation de haut niveau dans leurs productions.<br><br>SPAT Devices est d&eacute;velopp&eacute; par&nbsp;<a href=\"http://www.musicunit.fr/music-unit-fr/manuel-poletti\">Manuel Poletti</a>&nbsp;du studio&nbsp;<a href=\"http://www.musicunit.fr/musicunit-fr\">Music Unit</a>, en utilisant la biblioth&egrave;que SPAT Max d&eacute;velopp&eacute;e par l'&eacute;quipe&nbsp;<a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac\">Espaces Acoustiques et Cognitifs</a>&nbsp;- STMS (Ircam, CNRS, Sorbonne Universit&eacute;, Minist&egrave;re de la culture) et diffus&eacute;e par&nbsp;<a href=\"https://ircamamplify.com/\">Ircam Amplify</a>.<br><br><br><img alt=\"SPAT devices in action\" src=\"/media/uploads/user/652b3ea2f2ea8143749c0a25bb4e4fa1.png\"><br><br></p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 23,
            "forum_user": {
                "id": 23,
                "user": 23,
                "first_name": "Manuel",
                "last_name": "Poletti",
                "avatar": "https://forum.ircam.fr/media/avatars/PortraitMU_Manuel_Poletti.jpeg",
                "avatar_url": "/media/cache/25/a9/25a94fa5eedfb0e20cf188183156a531.jpg",
                "biography": "Sound artist and composer, computer music designer at IRCAM and consultant at Cycling'74, Manuel Poletti is in charge within Music Unit of the development of large format sound installation projects and software technologies dedicated in particular to augmented instrument, computer-assisted composition and sound spatialization. Manuel collaborates regularly with many leading contemporary artists with whom he creates elaborate sound systems and content in the fields of stage, art, design and architecture.",
                "date_modified": "2026-02-05T12:39:13.481208+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 4,
                        "forum_user": 23,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "poletti",
            "first_name": "Manuel",
            "last_name": "Poletti",
            "bookmarks": []
        },
        "slug": "spat-devices-par-music-unit-1",
        "pk": 2049,
        "published": false,
        "publish_date": "2023-02-09T10:22:13.898773+01:00"
    },
    {
        "title": "Keynote Envisioning a Future Music and Audio Metaverse - Jean-Marc JOT",
        "description": "Presentation during the Ircam Forum Workshop 2023 In Paris",
        "content": "<p>During the thirty years since the development of Ircam&rsquo;s Spat began, professional and consumer audio technology has progressed along several parallel threads &ndash; including sensory immersion, electronic transmission, content formats and creation tools. In a not-too-distant future, the authoring, consumption or performance of a significant portion of our media and music experiences might leverage a global set of frameworks and ecosystems often referred to as the Metaverse. In time, more and more of our media experiences (currently categorized into separate content industries such as music, movies and podcasts) may be cloud-based, navigable, non-destructive, ubiquitous, interoperable and adaptable to listener conditions. In this talk, we attempt to distill elements of this vision and some of the challenges that it entails, including the adoption of a common spatial audio rendering description model, and &ldquo;externalized&rdquo; binaural audio reproduction for AR/VR sound.</p>",
        "topics": [],
        "user": {
            "pk": 20758,
            "forum_user": {
                "id": 20749,
                "user": 20758,
                "first_name": "Jean-Marc",
                "last_name": "Jot",
                "avatar": "https://forum.ircam.fr/media/avatars/jmj_2023b_whitebg.png",
                "avatar_url": "/media/cache/43/5c/435c8591db0f56f21cc34332821b283a.jpg",
                "biography": "Globally recognized audio technology innovator in consumer electronics and pro markets, currently focusing more particularly on immersive audio, hearing personalization and music technology innovation.  I founded Virtuel Works to help accelerate the development and deployment of audio, voice and music computing technologies that will power immersive experiences.  Previously, I initiated and drove the development of novel sound processing technologies, platforms and standards for virtual and augmented reality, gaming, broadcast, cinema, and music creation - with Magic Leap, Creative Labs, DTS / Xperi, and iZotope / Native Instruments.  Before relocating to California in the late 90s, I conducted research at IRCAM in Paris, where I created the Spat software library for immersive music creation and performance.  Fellow of the Audio Engineering Society, regular speaker in industry and academic events.  Authored numerous publications and patents on digital audio signal processing.",
                "date_modified": "2025-04-16T18:29:13.648099+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jmjot",
            "first_name": "Jean-Marc",
            "last_name": "Jot",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3392,
                    "user": 20758,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "envisioning-a-future-music-and-audio-metaverse",
        "pk": 2062,
        "published": true,
        "publish_date": "2023-02-14T17:13:19+01:00"
    },
    {
        "title": "A System for the Synchronous Emergence of Music Derived from Movement",
        "description": "A System for the Synchronous Emergence of Music Derived from Movement is an immersive audio and visual work whose purpose is to define and explore a relationship between the movement of an artist’s hand (brush or pen, etc.) and a generative interactive computer-assisted compositional and performance system that is directly informed by those movements in real-time.",
        "content": "<p><strong>Introduction</strong></p>\r\n<p><em>A System for the</em> <em>Synchronous</em> <em>Emergence of Music Derived from Movement </em>is an immersive audio and visual work whose purpose is to explore an intentional relationship between the movement of an artist&rsquo;s hand (brush or pen, etc.) and a generative interactive computer-assisted compositional and performance system that is directly informed by those movements, in real-time.</p>\r\n<p>&nbsp;<img src=\"/media/uploads/tomasiello-modular_01b.png\" alt=\"\" width=\"1017\" height=\"879\" /></p>\r\n<p><strong></strong></p>\r\n<p><strong>History</strong></p>\r\n<p>This project was initially designed for an artist who creates kinetic visual artworks (digital paintings) in a live setting, in collaboration with performing musicians. At first, the idea was to offer the artist a self contained generative music system, granting them independence as an element of the work, while allowing complete focus on their visual practice.</p>\r\n<p>The concept evolved to incorporate how a visual artist&rsquo;s process was, or could be, influenced by musical feedback, and how a reciprocally responsive real-time generative system could affect the outcome of a visual piece, made by any user.</p>\r\n<p>This work builds on the experience I gained with my previous project <em>Moving Towards Synchrony.<span>&nbsp; </span></em>That is another immersive work, whose purpose is to explore the reciprocal relationship between electrical activity in the brain and external stimuli that has been generated -and defined by- those same physiological events.<span>&nbsp; </span>I invite you all to see the presentation from last year&rsquo;s IRCAM forum on the work.</p>\r\n<p><em>Moving Towards Synchrony</em> investigates the neurological effects of modulating brain waves and their corresponding physiological effects through use of a Brain-Computer Music Interface, which allows for the sonification of the data captured by an 
electroencephalogram which is translated into musical stimuli in real time.</p>\r\n<p>The work explores the validity of using the scientific method as an artistic process. The methodology is to create an evidence-based system for the purpose of further developing research-based projects.<span>&nbsp; </span>It focuses on quantifying measurable and repeatable physiological and psychological changes in a user using a neurofeedback loop, requiring that user to concentrate on the stimuli to the exclusion of any other activity or action.</p>\r\n<p>This system, in contrast, is concerned with staying in the process, and demands active mental and physical engagement from the user, who is influencing, and responding to, the external stimuli that have been defined by the fundamental physical gestures already in use in the visual arts practice. What is being investigated is how the choices of a visual artist may be influenced by a generative music system that is based on their physical movements, and to what extent the artist will allow that.<span>&nbsp; </span>Conversely, it is also possible that the user may choose not to let the feedback influence their movements, or any combination of those possibilities, at any time, during the work.<span>&nbsp; </span>It emphasizes the personal intuitive rules and decisions used while making improvisational choices, bridging the gap between the purely scientific focus of the previous project and the art practice of the user that is emphasized in this piece. It is instinctual and malleable versus quantifiable and physiological.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Method</strong><br /><br />The project employs wearable gestural technologies, collecting and translating movement data through a non-invasive MUGIC sensor, tracking motion on the pitch, yaw, and roll axes. The melodic and rhythmic content is derived from, and constantly influenced by, the user&rsquo;s movements. 
The musical performance and scales are directly influenced by hand orientation and movement.</p>\r\n<p>The data are used to generate real-time, interactive music compositions, which are simultaneously experienced by the user, influencing choices, and ultimately the final visual works, while also presenting a live immersive audio and visual piece.<span>&nbsp;</span></p>\r\n<p>The data are sonified through a patch built in Max 8, which manages the sound generation and DSP, and most importantly, is where the translation from movement to music is defined.<span>&nbsp;</span></p>\r\n<p>There are three main sections of this Max project:</p>\r\n<p>1:<strong> </strong>The <strong>sensor</strong> <strong>data capture</strong> section.</p>\r\n<p>2: The <strong>data conversion</strong> section.</p>\r\n<p>3: The<strong> sound generation and DSP</strong> section.<span>&nbsp;</span></p>\r\n<p>The <strong>sensor</strong> <strong>data capture</strong> section receives movement data from the MUGIC sensor, which sends the information via the OSC protocol over a WiFi connection. That data is then split into three separate sets: yaw, pitch, and roll.</p>\r\n<p>The <strong>data conversion</strong> section accepts the formatted sensor data and translates it to musical events.<span>&nbsp;</span>First, significant thresholds for each movement axis are defined and calibrated.<span>&nbsp; </span>These are chosen based on average gestural ranges taken prior to the use of the musical feedback. When those thresholds are reached or exceeded, an event is triggered.<span>&nbsp; </span>Depending on the mappings, those events can be one or more of several types of operations: the sounding of a note, a change in pitch or scale or mode, note values and timings, and/or other generative performance characteristics.<span>&nbsp; </span>The time-base for the musical events can be variable and based on hand movements, or set to a clock. 
Any of these mappings or threshold decisions can be easily changed to accommodate a different thesis or set of standards.</p>\r\n<p>The third section is <strong>sound generation and DSP</strong>. It is responsible for the sonification of the data translated by the <strong>data conversion</strong> section. This section includes synthesis models, timbral characteristics, and spatial effects.</p>\r\n<p>This project uses three synthesized voices created in Max 8 for the generative musical feedback. The timbral effects employed are waveform mixing, frequency modulation, and low-pass filtering. The spatial effects used include reverberation and delay.<span>&nbsp; </span>In addition to the initial settings of the voices, each of the timbral effects is modulated by separate event data captured by the wearable sensor.</p>\r\n<p>&nbsp;</p>\r\n<p><strong>Contact Details</strong></p>\r\n<p>Johnny Tomasiello<br /><a href=\"mailto:johnnytomasiello@gmail.com\">johnnytomasiello@gmail.com</a></p>",
        "topics": [
            {
                "id": 636,
                "name": "Generative music",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 277,
                "name": "Max 8",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 102,
                "name": "Movement",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 100,
                "name": "Sensor",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 20945,
            "forum_user": {
                "id": 20934,
                "user": 20945,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Tomasiello-modular_01b.png",
                "avatar_url": "/media/cache/8e/26/8e262109aba7469cf1a5c6158552e9f8.jpg",
                "biography": "Johnny Tomasiello is a multidisciplinary artist and composer-researcher, with a deep interest in expanded conceptualizations of sound, visuals, and time. His work employs methodologies across media, and is informed by research into neuroscience, psychophysics and biofeedback.  \n\nFocused on the relationship between perception and the mechanics of physiology, his immersive works, compositions, and performances reveal otherwise invisible processes in physiological and technological systems. Drawing on custom-built instruments and software, his work references mechanisms of expression and experience through data sonification, biofeedback, and reciprocal physiological systems.\n\nAs a performer, Tomasiello has produced live immersive performances and lectures featuring his interactive computer-assisted compositional performance systems and Brain-Computer Interfaces (BCI) that create, manipulate, and deconstruct audio and visuals, as well as physiological responses. He has lectured on the subject, staged live performances, scored films, and shown canvases and sound works in galleries and at institutions in the US and abroad.",
                "date_modified": "2026-02-12T19:09:20.143419+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "johnnytomasiello",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "a-system-for-the-synchronous-emergence-of-music-derived-from-movement-1",
        "pk": 1191,
        "published": true,
        "publish_date": "2022-07-12T18:48:47+02:00"
    },
    {
        "title": "Endangered Guitar in: Musical Instruments in the 21st Century",
        "description": "The “Endangered Guitar” is a hybrid interactive instrument meant to facilitate live sound processing. The software “listens” to the guitar input, to then determine the parameters of the electronic processing of the same sounds, responding in a flexible way. Since its inception in the year 2000 it has been presented in hundreds of concerts; in 23 different countries on 4 continents; in solo to large ensemble settings; through stereo and multichannel sound systems including Wavefield Synthesis; in collaborative projects with dance, visuals, and theater; and across different musical styles.",
        "content": "<p><a href=\"https://tammen.org/Endangered-Guitar-in-Musical-Instruments-in-the-21st-Century\">https://tammen.org/Endangered-Guitar-in-Musical-Instruments-in-the-21st-Century</a></p>",
        "topics": [
            {
                "id": 850,
                "name": "experimental",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 52,
                "name": "Improvisation",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 849,
                "name": "interactive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 206,
                "name": "Interactive real-time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 16747,
            "forum_user": {
                "id": 16744,
                "user": 16747,
                "first_name": "Hans",
                "last_name": "Tammen",
                "avatar": "https://forum.ircam.fr/media/avatars/Hans_Tammen_joergsteinmetz-medium.jpg",
                "avatar_url": "/media/cache/3b/97/3b976219def8982b587bdd88a7e557a3.jpg",
                "biography": "Hans Tammen likes to set sounds in motion, and then sit back to watch the movements unfold. Using textures, timbre and dynamics as primary elements, his music is continuously shifting, with different layers floating into the foreground while others disappear. His music flows like clockwork, “transforming a sequence of instrumental gestures into a wide territory of semi-hostile discontinuity; percussive, droning, intricately colorful, or simply blowing your socks off” (Touching Extremes).\n\nHis works have been presented at festivals in the US, Canada, Mexico, Russia, Ukraine, India, South Africa, the Middle East and all over Europe. Hans Tammen received grants and composer commissions from NewMusicUSA,  Chamber Music America, MAPFund, Mid-Atlantic Arts Foundation, American Music Center, Lucas Artists Residencies Montalvo, New York State Council On The Arts (NYSCA), New York Foundation For The Arts (NYFA), American Composers Forum w/ Jerome Foundation, Foundation for Contemporary Arts Emergency Funds, New York State Music Fund, Goethe Institute w/ Foreign Affairs Office, among others.",
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "hanstammen",
            "first_name": "Hans",
            "last_name": "Tammen",
            "bookmarks": []
        },
        "slug": "endangered-guitar-in-musical-instruments-in-the-21st-century",
        "pk": 1228,
        "published": false,
        "publish_date": "2022-08-06T19:41:15.355475+02:00"
    },
    {
        "title": "Soundings in Fathoms",
        "description": "Soundings in Fathoms is an unreleased multimedia photographic slideshow documenting some of the vanishing worlds of Southeast Louisiana. \nCommissioned and performed by the New York New Music Ensemble, accompanied by photos from Luca Hoffmann",
        "content": "<p><a class=\"ytp-share-panel-link ytp-no-contextmenu\" title=\"Share link\" href=\"https://youtu.be/AdVQ1E-lD1s\" target=\"_blank\" rel=\"noopener\" aria-label=\"Share link https://youtu.be/AdVQ1E-lD1s\">https://youtu.be/AdVQ1E-lD1s</a></p>\n<p>&nbsp;</p>\n<p>For purposes of the Paris Forum 2021, please find the full recording linked above.&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 14388,
            "forum_user": {
                "id": 14385,
                "user": 14388,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/4c41d061098661a620f425afdeec1c7c?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "bzervigon",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "soundings-in-fathoms",
        "pk": 940,
        "published": false,
        "publish_date": "2021-03-15T19:41:30.117129+01:00"
    },
    {
        "title": "The Body of Cognitions - Anran Yang, Liwei Yin",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris.",
        "content": "<p>This is an immersive video and audio project that presents an embodied story-telling through experiencing the everyday life of a non-human being. The concept derives from the idea of the embodied cognition. In other words, how we think, communicate and remember, is shaped by having a body with specific sensory and motor capacities which together forms memory, emotion, language and all the other aspects of life. This project is exploring how is having a different body with alternate sensory, height and mobility will affect the process of thinking and remembering. The visual part is made by an immersive video either in the form of VR or moving image of footages made from field recordings. The audio part consists of field recorded sound with collage and processing. The whole piece invites the audience to meditate through the &ldquo;eyes&rdquo; and &ldquo;ears&rdquo; of a non human being and enjoys a story-telling from a probably &ldquo;unusual&rdquo; perspective.&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 32739,
            "forum_user": {
                "id": 32691,
                "user": 32739,
                "first_name": "A",
                "last_name": "YANG",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/07c2af2ff072a34e13722fec0318010d?s=120&d=retro",
                "biography": "Anran Yang is an artist and designer studying Information experience design at Royal College of Art based in London, UK. Her practice aim to interact with users, in a playful and amusive way, that helps the user gain a better or even a brand new understanding of the world they are involved, the objects they use in daily life and the living environment surrounding them.",
                "date_modified": "2023-03-14T16:02:34+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "anran",
            "first_name": "A",
            "last_name": "YANG",
            "bookmarks": []
        },
        "slug": "the-body-of-cognitions",
        "pk": 2098,
        "published": true,
        "publish_date": "2023-02-28T17:24:59+01:00"
    },
    {
        "title": "Inform and evaluate a public space sound installation through perceptual evaluations, an art x science collaboration. (Niches Acoustiques II)",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p><span data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;We will present the scientific and artistic collaboration currently implemented within the Perception and Sound Design team at IRCAM-STMS, in the framework of Valerian Fraisse's thesis, with the sound artist Nadine Sch&uuml;tz, composer in research at IRCAM. This collaboration aims to inform and accompany the composition of a perennial sound installation currently being created by Nadine Sch&uuml;tz. This installation, entitled \\&quot;Niches Acoustiques\\&quot;, a winning project of the Participatory Budget of the City of Paris, is dedicated to an urban public space: the square of the new courthouse (Tribunal Judiciaire) of Paris. After a campaign of recordings and measurements allowing us to characterize the existing sound environment of the site, we seek, on the one hand, to inform the composition of this work based on laboratory listening tests and, on the other hand, to evaluate the impact of the installation on the urban soundscape in situ. 
During the presentation, we will jointly introduce the general framework of this mixed art/science research within a general sound design research approach, expose the methodology and the results of the first experimental phases of the study, and discuss the implications of this collaboration from a scientific and artistic point of views.&quot;}\" data-sheets-userformat=\"{&quot;2&quot;:5119,&quot;3&quot;:{&quot;1&quot;:0},&quot;4&quot;:{&quot;1&quot;:2,&quot;2&quot;:16777215},&quot;5&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;6&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;7&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;8&quot;:{&quot;1&quot;:[{&quot;1&quot;:2,&quot;2&quot;:0,&quot;5&quot;:{&quot;1&quot;:2,&quot;2&quot;:0}},{&quot;1&quot;:0,&quot;2&quot;:0,&quot;3&quot;:3},{&quot;1&quot;:1,&quot;2&quot;:0,&quot;4&quot;:1}]},&quot;9&quot;:0,&quot;10&quot;:0,&quot;11&quot;:4,&quot;12&quot;:0,&quot;15&quot;:&quot;Arial&quot;}\">This collaboration aims to inform and accompany the composition of a perennial sound installation currently being created by Nadine Sch&uuml;tz. </span></p>\r\n<p><span data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;We will present the scientific and artistic collaboration currently implemented within the Perception and Sound Design team at IRCAM-STMS, in the framework of Valerian Fraisse's thesis, with the sound artist Nadine Sch&uuml;tz, composer in research at IRCAM. 
This collaboration aims to inform and accompany the composition of a perennial sound installation currently being created by Nadine Sch&uuml;tz.&quot;}\">This installation, entitled \"Niches Acoustiques\", a winning project of the Participatory Budget of the City of Paris, is dedicated to an urban public space: the square of the new courthouse (Tribunal Judiciaire) of Paris. After a campaign of recordings and measurements allowing us to characterize the existing sound environment of the site, we seek, on the one hand, to inform the composition of this work based on laboratory listening tests and, on the other hand, to evaluate the impact of the installation on the urban soundscape in situ. </span></p>\r\n<p><span>During the presentation, we will jointly introduce the general framework of this mixed art/science research within a broader sound design research approach, present the methodology and the results of the first experimental phases of the study, and discuss the implications of this collaboration from scientific and artistic points of view.</span></p>",
        "topics": [],
        "user": {
            "pk": 23160,
            "forum_user": {
                "id": 23142,
                "user": 23160,
                "first_name": "Valerian",
                "last_name": "Fraisse",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/563642d4aa59b73bafa7a6671c45514c?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-12-01T17:08:39.945839+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 357,
                        "forum_user": 23142,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "fraisse",
            "first_name": "Valerian",
            "last_name": "Fraisse",
            "bookmarks": []
        },
        "slug": "inform-and-evaluate-a-public-space-sound-installation-through-perceptual-evaluations-an-art-x-science-collaboration-niches-acoustiques-ii",
        "pk": 1332,
        "published": true,
        "publish_date": "2022-09-13T12:19:13+02:00"
    },
    {
        "title": "Creative Music Technologies for Learning and Play - A residency update",
        "description": "Update for Alex Ruthmann's residency at IRCAM",
        "content": "<p>Since arriving at IRCAM in late January 2020, my time has been spent getting to know a few of the IRCAM research labs, spending time with the amazing archives staff, and meeting folks in the broader Paris community working at the intersection of creative music technologies, learning, and play.&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 39,
            "forum_user": {
                "id": 39,
                "user": 39,
                "first_name": "S. Alex",
                "last_name": "Ruthmann",
                "avatar": "https://forum.ircam.fr/media/avatars/alexruthmann_portrait_square_0_1.png",
                "avatar_url": "/media/cache/7e/bf/7ebf2cb69693475cb8c6bb27b234fc62.jpg",
                "biography": "S. Alex Ruthmann is Area Head and Associate Professor of Interactive Media and Business at NYU Shanghai and Associated Professor of Music Education and Music Technology at NYU Steinhardt. He is the Founder/Director of the NYU Music Experience Design Lab (MusEDLab), and core faculty in the Music and Audio Research Lab (MARL). The MusEDLab creative learning and software projects are in active use by over 6.5 million people across the world.\n\nRuthmann recently launched a new research lab focused on sustainable entrepreneurship practices in classical music training programs in collaboration with the New World Symphony. This work is funded by a recent 5-year award from the National Endowment of the Arts. Ruthmann's research portfolio also includes a Norwegian project DigiSus, a participatory design research project focused on the design and development of interactive arts spaces infused with non-screen-based digital technologies for creative play. \n\nRuthmann currently serves as Co-Editor of the International Journal of Music Education and is co-author of the book Scratch Music Projects, an introduction to creative music coding projects in MIT's Scratch programming language for kids.",
                "date_modified": "2024-10-08T11:26:37.742325+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "alexruthmann",
            "first_name": "S. Alex",
            "last_name": "Ruthmann",
            "bookmarks": []
        },
        "slug": "creative-music-technologies-for-learning-and-play-a-residency-update",
        "pk": 543,
        "published": false,
        "publish_date": "2020-02-22T10:31:08.197374+01:00"
    },
    {
        "title": "Immersion Sonore, contes sonores en Thiérache",
        "description": "As part of the culture-rurality contract signed between the Direction Régionale des Affaires Culturelles des Hauts-de-France, the Éducation Nationale, and the Communauté de Communes, Les Ateliers des Lutheries Numériques are offering workshops introducing artistic practice through electroacoustic music, from January to June 2019, across the Thiérache du Centre territory.",
        "content": "<p><br />This project takes the tales and legends of the Thi&eacute;rache as its heritage material.<br />These were first read aloud by the elders of the territory. The recordings will then serve as the basis for sound tales created by pupils from CE1 to CM2 in several schools of the area, including pedagogical work on listening to sound phenomena and on vocal and spatial practice.</p>\r\n<p>A public presentation in the form of an installation is planned at the end of three workshops, from mid-June to mid-July 2019, in two media libraries of the Thi&eacute;rache du Centre territory.</p>\r\n<h2>THE TERRITORY</h2>\r\n<p>The Communaut&eacute; de communes de la Thi&eacute;rache du centre brings together 68 municipalities with a density of 38 inhabitants per km&sup2;. It covers a very rural territory whose inhabitants are far removed from cultural infrastructure.<br />Since 2012, it has been working on the cultural fabric of the area, notably by setting up workshops for schools and residents.</p>\r\n<h2>THE BIRTH OF THE PROJECT</h2>\r\n<p>The project &laquo; Immersion sonore, contes sonores sur le territoire de la Thi&eacute;rache du centre &raquo; is part of a broader effort to make artistic creation accessible to residents of the communaut&eacute; de communes who are far from cultural institutions, live performance, and reading. The project was co-constructed with the services of the &Eacute;ducation Nationale and the DRAC Hauts-de-France, respecting the three pillars of arts and cultural education: encounters, practice, and knowledge.</p>\r\n<p>The Communaut&eacute; de communes de la Thi&eacute;rache du centre wants to contribute fully to the success and fulfilment of every young person by allowing them, through the sensory experience of practice, encounters with works and artists, and their own investigations, to build a personal artistic culture, to become acquainted with the different languages of art, and to diversify and develop their means of expression.</p>\r\n<h2>THE PROJECT</h2>\r\n<p>Aiming to introduce pupils to recording and sound processing, we decided to work from recorded voices of the territory's elders narrating the local tales and legends. Working with the voice, a concrete element that pupils can easily identify, allows a more direct approach than more abstract material would. These recordings formed a first session, accompanied by a listening session of spatialized works, to prepare the audience for our future work. By involving the elders in this project, alongside the children and their parents, we want to create an intergenerational bond in the creative process, involving all age groups of the population around a common element: sharing the heritage and culture of the territory.</p>\r\n<p>Our next step will be to run three sessions with eight classes of the communaut&eacute; de communes, introducing listening, practice, and the making of sound sequences from the recordings already made. These workshops will take place in the media libraries, to continue anchoring them as cultural third places in the territory.</p>\r\n<p>This work will take the form of game-like sequences on adapted interfaces. We want to avoid screens as much as possible in these workshops, so that pupils can grasp, through listening, the link between a gesture and the sound material it produces.</p>\r\n<p>To this end, each sequence will be based on a different sound action (audio processing, placing sources in space), related to notions of sound such as presence, the position of objects in space, superposition, whispering, prosody, or reflections. This work will be done in small groups of pupils, allowing solo, duo, or small-group play for creating material or sequences. The goal of the group work is to familiarize the youngest with collective listening: playing together, or creating a single sound in pairs, means having to coordinate to reach a final result.</p>\r\n<p>Following these discovery sessions, a sequence-creation phase will be proposed, followed by a compilation and editing phase led by the facilitators, leading to a public presentation in the form of a sound installation.</p>\r\n<h2>THE PEDAGOGICAL MISSIONS</h2>\r\n<p>Beyond the work on sound, the goal of these workshops is also to co-develop a pedagogical and artistic action in the territory, offering pupils artistic guidance by professionals, a different kind of artistic practice, and the transmission of know-how and cultural knowledge. This mission also aims to introduce different, and sometimes new, artistic practices, in order to widen the range of cultural offerings.</p>\r\n<p>Moreover, by grounding our work in the tales and legends, we are determined to tie our action to that of the territory, highlighting a heritage that is too often forgotten so that the youngest, as well as the general public, can rediscover it.</p>\r\n<p><span>Thanks to the interaction created between the elders and the youngest, and by offering our sessions in the media libraries, the cultural third places of the territory, we also want to make this project accessible to the widest possible audience.&nbsp;</span></p>\r\n<h2>PUBLIC PRESENTATION</h2>\r\n<p>Following the co-construction work with the main stakeholders of the territory, the presentation that seems most appropriate to us will consist of showing the pupils' creation as a sound installation in two media libraries. This installation will feature a 360-degree multi-point diffusion system provided by the Ateliers des Lutheries Num&eacute;riques, playing a work composed of the different sequences made by the pupils. The aim of this installation is to invite all the inhabitants of the Thi&eacute;rache du centre, along with curious visitors, to come listen to and experience these sound tales. They will thus rediscover the legends of the territory in a form never heard before, for a unique listening experience.</p>",
        "topics": [],
        "user": {
            "pk": 4936,
            "forum_user": {
                "id": 4933,
                "user": 4936,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/2cf9715dbfd6536230970f572fc608b1?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "anagrammestudio",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "immersion-sonore-contes-sonores-en-thierache",
        "pk": 230,
        "published": true,
        "publish_date": "2019-07-07T19:27:52+02:00"
    },
    {
        "title": "Conflict of Interest",
        "description": "“Conflict Of Interest” is a sonification of my own personal genetic analysis. Originally written for the Endangered Guitar, a hybrid interactive software/guitar instrument, the data set controls sonic processing as well as 8-channel spatialization in real time. The piece also uses the data to interfere with the performer’s intentions. My genetic data set consists of approx. 600,000 lines of genetic variations and was cross-referenced with approx. 150,000 research articles from publicly available databases. Data associated not with health issues but with “identity”, as a largely arbitrary, socially and historically constructed concept, comes up in unexpected places, disrupting the performance and pushing the piece in a new direction.",
        "content": "<p><a href=\"https://tammen.org/Conflict-Of-Interest\">https://tammen.org/Conflict-Of-Interest</a></p>",
        "topics": [
            {
                "id": 853,
                "name": "dna",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 850,
                "name": "experimental",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 854,
                "name": "genetics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 849,
                "name": "interactive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 642,
                "name": "Max/msp",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 852,
                "name": "multichannel sound",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 16747,
            "forum_user": {
                "id": 16744,
                "user": 16747,
                "first_name": "Hans",
                "last_name": "Tammen",
                "avatar": "https://forum.ircam.fr/media/avatars/Hans_Tammen_joergsteinmetz-medium.jpg",
                "avatar_url": "/media/cache/3b/97/3b976219def8982b587bdd88a7e557a3.jpg",
                "biography": "Hans Tammen likes to set sounds in motion, and then sit back to watch the movements unfold. Using textures, timbre and dynamics as primary elements, his music is continuously shifting, with different layers floating into the foreground while others disappear. His music flows like clockwork, “transforming a sequence of instrumental gestures into a wide territory of semi-hostile discontinuity; percussive, droning, intricately colorful, or simply blowing your socks off” (Touching Extremes).\n\nHis works have been presented at festivals in the US, Canada, Mexico, Russia, Ukraine, India, South Africa, the Middle East and all over Europe. Hans Tammen received grants and composer commissions from NewMusicUSA,  Chamber Music America, MAPFund, Mid-Atlantic Arts Foundation, American Music Center, Lucas Artists Residencies Montalvo, New York State Council On The Arts (NYSCA), New York Foundation For The Arts (NYFA), American Composers Forum w/ Jerome Foundation, Foundation for Contemporary Arts Emergency Funds, New York State Music Fund, Goethe Institute w/ Foreign Affairs Office, among others.",
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "hanstammen",
            "first_name": "Hans",
            "last_name": "Tammen",
            "bookmarks": []
        },
        "slug": "conflict-of-interest",
        "pk": 1230,
        "published": false,
        "publish_date": "2022-08-06T19:49:08.857189+02:00"
    },
    {
        "title": "FLUX",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris",
        "content": "<p>FLUX is an immersive spatial audio composition designed for IRCAM&rsquo;s 6 channel speaker setup. The work explores the relationship between rivers, cities and people, illustrating commonalities and differences of the perception of rivers across the world. Utilising recordings of a range of different people speaking about their personal experiences with rivers, FLUX brings attention to the significance of rivers in our memories, daily lives, and communities. &nbsp;</p>\n<p>The use of spatial audio allows the audience to experience a sense of geographical distance in a physical environment and illustrates the interconnectedness of bodies of water.&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 32945,
            "forum_user": {
                "id": 32897,
                "user": 32945,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6a1339760950a519a0910c128edfbbef?s=120&d=retro",
                "biography": "Ojasvani Dahiya is exploring creating interactive and immersive experiences that look at realities of the distant past and the far future which are grounded in the present. She is currently experimenting with new and emerging forms of technology to create visual experiences informed through sound and music. Her areas of interest are post-coloniality, identity, dreams and altered states of consciousness. Ojasvani graduated from Emerson College, Boston (2020) with a BFA in Media Arts Production, and went on to work in the Film/TV post-production industry in Los Angeles. She is currently on the Digital Direction MA program at the Royal College of Art.",
                "date_modified": "2023-11-06T21:49:51.196641+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "odahiya",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "flux-1",
        "pk": 2161,
        "published": false,
        "publish_date": "2023-03-25T15:34:22.230153+01:00"
    },
    {
        "title": "Voices in a bottle",
        "description": "Your voice is not alone. Mezzo Forte shares the web app \"Voices in a bottle\" to tie everyone’s voices together beyond the boundaries of the measures against contagion: open the link, listen to the message that has been sent to you, and record yours. Tell your story, or share a thought or a quotation, in 40 seconds: a virtual bottle will deliver it to someone you don’t know who is facing the COVID-19 emergency just as you are.",
        "content": "<p><iframe width=\"425\" height=\"350\" src=\"//player.vimeo.com/video/402709210?title=0&amp;byline=0\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p><span style=\"font-weight: 400;\">Mezzo Forte shares the web app <strong>Voices in a bottle</strong></span></p>\r\n<p><a href=\"https://voicesinabottle.mezzoforte.design/\"><span style=\"font-weight: 400;\">https://voicesinabottle.mezzoforte.design/</span></a></p>\r\n<p><span style=\"font-weight: 400;\">to tie everyone&rsquo;s voices together beyond the boundaries of the measures against contagion: open the link, listen to the message that has been sent to you, and record yours. Tell your story, or share a thought or a quotation, in 40 seconds: a virtual bottle will deliver it to someone you don&rsquo;t know who is facing the COVID-19 emergency just as you are.</span></p>\r\n<p><span style=\"font-weight: 400;\">The app also gives you the opportunity to make an act of solidarity by supporting the historic fundraiser organized by the </span><a href=\"https://covid19responsefund.org/\"><strong>World Health Organization</strong></a>.</p>\r\n<p>&nbsp;</p>\r\n<p>In France, the app lets you make a gesture of solidarity by supporting the <strong><a href=\"https://t.co/SQVfl1OKl2?amp=1\">Fondation de France</a></strong> initiative to help caregivers, researchers, and the most vulnerable.</p>\r\n<p>&nbsp;</p>\r\n<p><span>In Italy, through the app you can make a gesture of solidarity in favor of intensive care units, supporting the joint initiative of the Credito cooperativo, </span><strong><a href=\"https://www.gofundme.com/f/fbt8dn-sosteniamo-le-terapie-intensive\"><span>#Terapie intensive contro il virus</span></a></strong>.</p>",
        "topics": [],
        "user": {
            "pk": 17475,
            "forum_user": {
                "id": 17472,
                "user": 17475,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/9b9282a6dd27d6634c7091765d065aa7?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "liuni",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "voices-in-a-bottle",
        "pk": 611,
        "published": true,
        "publish_date": "2020-03-31T22:27:28+02:00"
    },
    {
        "title": "SOUND ARCHEOLOGY: The Unearthing of A Material Aurality In Sound Art",
        "description": "2022",
        "content": "<p><span>How can we form-shift, probing with our ears into the </span><span>substrates</span><span> of the earth? What does the gusty </span><span>storm</span><span> sound like to that which is slowly </span><span>eroding</span><span> away? What do </span><span>sentinels of the sea </span><span>hear when they are </span><span>stranded on the land</span><span>? And how can we&mdash;as artists&mdash;investigate these slow formations of history through the senses of the ancient witnesses? Questions like these emerged from </span><span>a series of process-based interactions with sites and materials</span><span> and sparked the initial inspiration to combine sound practices with the subject of archaeology.&nbsp;</span></p>\r\n<p><span>A speculative proposition, a developing theoretical framework, a pseudo field guide, and an artistic exercise, Sound Archaeology is all of the above and, at its core, trans-disciplinary.&nbsp;</span></p>\r\n<p><span>The central hypothesis of Sound Archaeology is built on a fundamental shift in sound theory from the materiality of sonic phenomena to the aurality of materials. It supposes that as organisms and objects of the earth adapt to the ceaselessly changing environment over their lifespans, vibrational signals excavated from within them demonstrate an aptitude for archiving histories on a different temporal scale. Such an ontological shift would endow human passengers like us with a crack of an opening through which we are able to peek with the &ldquo;ears&rdquo; of materials into the past.&nbsp;</span></p>\r\n<p><span>What follows in the presentation is by no means an exhaustive inquiry into a new form of sound art, but rather a hybrid collage of subjective accounts of bodily experience, cited research references, and work processes.&nbsp;</span><span>It will first touch on inventing new probes for recording in the field. 
Then, we will explore the application of material aurality in converging environmental recordings, human voices, and historical artifacts in the context of immersive sound art. And finally, we will introduce our ongoing project, developed in collaboration with local communities, which grew out of our recent field studies in the Atacama region of northern Chile.&nbsp;</span></p>",
        "topics": [
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 944,
                "name": "materiality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 274,
                "name": "Soundart",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 524,
                "name": "Design et traitement sonores",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 30737,
            "forum_user": {
                "id": 30690,
                "user": 30737,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/%E4%B8%94%E7%94%A82.jpg",
                "avatar_url": "/media/cache/11/63/116369f10e2967280cf463f828b0b7cd.jpg",
                "biography": "Based in Queens, New York, Alchemyverse was founded in 2020 by Bicheng Liang (b. 1994, China) and Yixuan Shao (b. 1996, China/US). The duo combines their respective backgrounds in visual and sound arts as well as interests in craft and research, working across disciplines of print, sound, installation, and performance. They have conducted field works in places such as Oahu and Moa Kea, Hawaii, the American Southwest, the Hudson Highlands, and the Chilean deserts, collaborating with the land and resources as well as with local communities and institutions. Alchemyverse has exhibited at the School of Visual Arts (NY), Lenfest Center of the Arts (NY), Catherine Fosnot Art Gallery and Center (CT), LeRoy Neiman Gallery (NY), and the Bishop Museum (HI, in collaboration with Michael Joo). A recent alumnus of LMCC (Lower Manhattan Cultural Council) Arts Center Residency program (2021), they were also a finalist of the 2021 Monira Foundation Artist Residency Program and the 2022 Smack Mellon Artist in Residence Program. Currently, they are in residence at the International Studio & Curatorial Program (NY) and a participant in the annual leadership camp at Asia Art Archive in America.",
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "alchemyverse",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "sound-archeology-the-unearthing-of-a-material-aurality-in-sound-art",
        "pk": 1387,
        "published": true,
        "publish_date": "2022-09-30T06:07:10+02:00"
    },
    {
        "title": "Hello World",
        "description": "HELLO WORLD !!??!!\nExcuse my little stupid first Article but I will learn to do all the Simple Steps to manage my first little \"Hello World\" Project.\nSo in because of - you reading this - you can help me to do this step easy, give me replays to this simple Questions. ",
        "content": "<p>So this are the Questions.:</p>\n<ol>\n<li>What Informations do you want to check \"Is a Project Intressting for you\"</li>\n<li>Can you please explain \"What was the way you found this Project\"</li>\n<li>Can you please explain \"How is the normal way to find interesting Projects for You\"</li>\n</ol>\n<p>In the Next Time I will post new questions. ( In normal I will do it when I upload my first petch to this project )</p>",
        "topics": [
            {
                "id": 357,
                "name": "First-steps",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 354,
                "name": "Hello-world",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 17628,
            "forum_user": {
                "id": 17624,
                "user": 17628,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6389f37aeaee190f92e385b6a9b395f6?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "creco",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "hello-world",
        "pk": 588,
        "published": false,
        "publish_date": "2020-03-21T19:50:51.816545+01:00"
    },
    {
        "title": "Spatial Intelligence - Robert LISEK",
        "description": "The project proposes a new strategy for creating evolving VR structures (3D audio and graphics) based on the idea of ​​adaptation to a dynamically changing environment and with the use of advanced AI methods.",
        "content": "<p><span>The project proposes a new strategy for creating evolving VR structures (3D audio and graphics) based on the idea of ​​adaptation to a dynamically changing environment and with the use of advanced AI methods (transformer model). The project investigates a topology of oscillations, gradients and fluctuations where each 3d manifold potentially hosts new manifolds, balance of forces between tension and relaxation, expansion and contraction. It is a constantly evolving assembling and unfolding mechanism. There is no division between the performer and the environment. The \"inner\" space is topologically in contact with the \"outer\" space. By significantly expanding existing research, the project creates a machine learning model useful for testing various aspects of adaptation to a complex dynamic environment and sound </span><span>spatialization</span><span>.</span></p>",
        "topics": [],
        "user": {
            "pk": 21154,
            "forum_user": {
                "id": 21143,
                "user": 21154,
                "first_name": "",
                "last_name": "",
                "avatar": "https://forum.ircam.fr/media/avatars/Lisek_portrait_rb_lisek46_2.jpg",
                "avatar_url": "/media/cache/8c/c5/8cc537299368c10d31af34af793faaf4.jpg",
                "biography": "Robert B. Lisek is an artist, mathematician and composer who focuses on systems, networks and processes (computational, biological, social). He is involved in a number of projects focused on media art, creative storytelling and interactive art. Drawing upon post-conceptual art, software art and meta-media, his work intentionally defies categorization. Lisek is a pioneer of art based on Artificial Intelligence and Machine Learning. Lisek is also a composer of contemporary music, author of many projects and scores on the intersection of spectral, stochastic, concret music, musica futurista and noise. Lisek is a founder of Fundamental Research Lab and ACCESS Art Symposium. He is the author of 300 exhibitions and concerts, among others: SIBYL - ZKM Karlsruhe; SIBYL II - IRCAM Center Pompidou; QUANTUM ENIGMA - Harvestworks Center New York and STEIM Amsterdam; TERROR ENGINES - WORM Center Rotterdam, Secure Insecurity - ISEA Istanbul; DEMONS - Venice Biennale (accompanying events); Manifesto vs. Manifesto - Ujazdowski Cartel of Contemporary Art, Warsaw; NEST - ARCO Art Fair, Madrid; Float - Lower Manhattan Cultural Council, NYC; WWAI - Siggraph, Los Angeles.",
                "date_modified": "2025-04-15T22:29:55.560395+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lisek",
            "first_name": "",
            "last_name": "",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 220,
                    "emitter_object_id": 3282,
                    "user": 21154,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "spatial-intelligence",
        "pk": 2044,
        "published": true,
        "publish_date": "2023-02-08T17:43:01+01:00"
    },
    {
        "title": "SPAT Devices par Music Unit",
        "description": "Music Unit produit la collection SPAT, plugins Max For Live distribués par Ableton.",
        "content": "<p>Les plugins <a href=\"https://www.ableton.com/fr/packs/spat-bundle/\">SPAT</a>&nbsp;permettent d'agencer et d&eacute;placer des sources sonores dans des espaces audio r&eacute;els ou virtuels, en 2D ou 3D, gr&acirc;ce &agrave; des moteurs de spatialisation avanc&eacute;s, bas&eacute;s sur le processeur Spatialisateur d&eacute;velopp&eacute; &agrave;&nbsp;<a href=\"https://www.ircam.fr/\">l'IRCAM</a>&nbsp;depuis bient&ocirc;t trois d&eacute;cennies. <br><br><img alt=\"\" src=\"/media/uploads/user/6a7fac8a99cd6b475c4b13dc4c01c997.png\"><br><br>Les plugins sont propos&eacute;s en deux packs : SPAT Multichannel et SPAT Stereo.<br><br>SPAT Multichannel est destin&eacute; aux artistes, producteurs et ing&eacute;nieurs du son qui souhaitent tirer le meilleur parti de la configuration multicanale de leur studio ou salle de concert.<br><br>SPAT Stereo est destin&eacute; &agrave; celles et ceux qui disposent de configurations st&eacute;r&eacute;o simples (haut-parleurs, casque audio) et qui souhaitent tout de m&ecirc;me int&eacute;grer des techniques de spatialisation de haut niveau dans leurs productions.<br><br>SPAT Devices est d&eacute;velopp&eacute; par&nbsp;<a href=\"http://www.musicunit.fr/music-unit-fr/manuel-poletti\">Manuel Poletti</a>&nbsp;du studio&nbsp;<a href=\"http://www.musicunit.fr/musicunit-fr\">Music Unit</a>, en utilisant la biblioth&egrave;que SPAT Max d&eacute;velopp&eacute;e par l'&eacute;quipe&nbsp;<a href=\"https://www.ircam.fr/recherche/equipes-recherche/eac\">Espaces Acoustiques et Cognitifs</a>&nbsp;- STMS (Ircam, CNRS, Sorbonne Universit&eacute;, Minist&egrave;re de la culture) et diffus&eacute;e par&nbsp;<a href=\"https://ircamamplify.com/\">Ircam Amplify</a>.<br><br><br><img alt=\"SPAT devices in action\" src=\"/media/uploads/user/652b3ea2f2ea8143749c0a25bb4e4fa1.png\"><br><br></p>",
        "topics": [
            {
                "id": 203,
                "name": "Ableton live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 1133,
                "name": "Max for live",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 23,
            "forum_user": {
                "id": 23,
                "user": 23,
                "first_name": "Manuel",
                "last_name": "Poletti",
                "avatar": "https://forum.ircam.fr/media/avatars/PortraitMU_Manuel_Poletti.jpeg",
                "avatar_url": "/media/cache/25/a9/25a94fa5eedfb0e20cf188183156a531.jpg",
                "biography": "Sound artist and composer, computer music designer at IRCAM and consultant at Cycling'74, Manuel Poletti is in charge within Music Unit of the development of large format sound installation projects and software technologies dedicated in particular to augmented instrument, computer-assisted composition and sound spatialization. Manuel collaborates regularly with many leading contemporary artists with whom he creates elaborate sound systems and content in the fields of stage, art, design and architecture.",
                "date_modified": "2026-02-05T12:39:13.481208+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 4,
                        "forum_user": 23,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "poletti",
            "first_name": "Manuel",
            "last_name": "Poletti",
            "bookmarks": []
        },
        "slug": "spat-devices-par-music-unit",
        "pk": 2048,
        "published": false,
        "publish_date": "2023-02-09T10:21:06.529115+01:00"
    },
    {
        "title": "The Making of Electric Rain Krems",
        "description": "Rain as a sounding and climatic phenomenon is the starting point for the sound installation Electric Rain Krems (2022). Water is essential for all life on earth, and rain is a central part of the water cycle. Rain is the result of earth's climate system, but it is also a complex sound phenomenon. Electric Rain Krems employs a ninety-six channel sound system for artistic work with rain sounds. Exhibited at Klangraum Krems, Austria, 9 June – 2 October 2022.",
        "content": "<p>The auditive qualities of rain can seem arbitrary, but there are complex systems involved; the distribution of each drops position, size and quantity are some of the many elements that affect the sound of rain. In the sound installation <em>Electric Rain Krems</em> (2022) the sounding properties of rain are explored through a large number of sound-objects. Ninety-six individually controlled speakers are situated in the exhibition space Klangraum Krems, enveloping the listener in a three-dimensional soundscape.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/26354cd59c3979235e6629d15253a166.jpg\" /></p>\r\n<p>Photo: Asbj&oslash;rn Blokkum Fl&oslash;.<strong>&nbsp;</strong></p>\r\n<p>Klangraum Krems is a space for sound art and music in the Minoritenkirche, a former monastery church from the 13th century. The Minoritenkirche is located in Stein an der Donau, an old medieval city along the Danube River. The acoustics of the old monastery church creates a clear and open sound, with a long reverberation time. The space is divided into three parts, two aisles and a central nave, where the nave has more than double the ceiling height of the two aisles. The divisions of the space provide natural acoustic separations in which the sound installation can interact.</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/2166d989e83067eb9aec1195682109a7.jpeg\" /></p>\r\n<p>Photo: Asbj&oslash;rn Blokkum Fl&oslash;.<strong>&nbsp;</strong></p>\r\n<p><strong>Custom-built electronics</strong><br />Using the MADI standard, three thirty-two channel DA-converters feed a custom-built ninety-six channel amplifier. The custom-built loudspeakers use coaxial speaker elements. This results in a more even radiation pattern, and the cylindrical speaker cabinets are also designed with a more equal radiation pattern in mind. 
Circuits and circuit boards for the amplifier and the loudspeaker crossover was designed in collaboration with Hans Wilmers and Thom Johansen at Notam (the Norwegian Centre for Technology, Art and Music).</p>\r\n<p><img alt=\"\" src=\"https://forum.ircam.fr/media/uploads/user/f309f5bf923e26750bba963fcd083bc1.jpeg\" /></p>\r\n<p>Photo: Asbj&oslash;rn Blokkum Fl&oslash;.</p>\r\n<p><strong>Organising studio recordings using MuBu</strong><br />To fully utilize such a large system, different types of sound material are used. Field recordings, studio recordings and synthetic sound is blended to create flexible models of rain sounds. The field recordings are loaded into buffers, and various parts of the buffers are distributed across the ninety-six speakers. Because of the granular nature of rain sounds, this results in a spatially distributed, granular synthesis.</p>\r\n<p>Approximately two thousand studio recordings of single water drops have been made, where water drops of different sizes fall on different materials such as metal, wood, glass, plastic, cardboard, and fabric. Ircam&rsquo;s MuBu Max externals was used for analysing and organizing these recordings. Segmented audio data with audio descriptors was utilized to prepare the recordings for the installation. Furthermore, MuBu concatenative synthesis is used to play back segments of audio buffers based on the analysis.</p>\r\n<p>The sound of singular drops has also been achieved using synthesis, controlling the frequency, spectral balance, density, and combination into various structures. There is a body of research on how sound relates to meteorological phenomena, and several models for simulation of the acoustical and physical qualities of rain exist. 
This installation ended up using algorithms for the synthesis of rain sounds developed by the musician Katsuhiro Chiba, algorithms that are at the same time elegant, simple, and good sounding.</p>\r\n<p><strong>Organising space using Spat</strong><br />Ircam&rsquo;s Spat Max externals has been used for working with the spatial aspects of the installation. The sounds were distributed in space with variation in placement, density, and movement. Field recordings were used for creating diffuse sound fields, while singular sounds were used as point sources with precise placement in the space.</p>\r\n<p>The K-Nearest Neighbours (KNN) panning algorithm was the main panning algorithm used in the installation. The KNN panning algorithm does not depend on a sweet spot to be perceived correctly and is well suited to large scale sound installations where the listener has constantly changing listening positions.</p>\r\n<p>This panning algorithm makes it possible to define the maximum number of contributing speakers as well as the spread of the virtual source. In this way the sound source can vary from anything between a single point source, to large diffuse sound fields across the entire sound system.</p>\r\n<p>In addition to the Spat Max externals, two other methods were used for spatial work. One is the distribution of large granular sound fields based on field recordings, as described earlier. The other is simply using single loudspeakers as singular sound sources. Because of the high number of speakers spread across a large reverberant space, this method turns out to be very effective.</p>\r\n<p>These three methods combined; distributed field recordings, KNN and single loudspeakers, makes for a flexible and powerful combination of spatial tools.&nbsp;</p>\r\n<p><strong>Along the Danube River</strong><br />Rain and water have been important for the area along the Danube River, a cultural landscape with roots going back to prehistoric times. 
When we move the sound of rain from the agricultural areas on the outside, into a monastery church from the 13th century, a separate layer of potential interpretations arises.&nbsp;</p>\r\n<p>Rain is the result of earth's climate system, but it is also a complex sound phenomenon. In the installation <em>Electric Rain Krems</em>, this sounding dimension is examined through ninety-six individually controlled speakers that fill the space and present the listener with an all-encompassing sound field. The sound field is coloured by the acoustic properties of the space, and changes as you move through the old monastery church.</p>\r\n<p><iframe width=\"850\" height=\"700\" src=\"https://player.vimeo.com/video/811637974?title=0&amp;byline=0&amp;portrait=0&amp;color=8dc7dc\" allowfullscreen=\"allowfullscreen\"></iframe></p>\r\n<p>Exhibited at Klangraum Krems, 9 June &ndash; 2 October 2022.</p>\r\n<p>Parts of this text are based on the text <em>About the Construction of Electric Rain</em> (J&oslash;ran Rudi, 2018).</p>\r\n<p><em>Electric Rain Krems</em> was made with support from Music Norway and produced by Klangraum Krems.</p>\r\n<p>Technology developed and produced at Notam (the Norwegian Centre for Technology, Art and Music) by Asbj&oslash;rn Blokkum Fl&oslash;, Thom Johansen and Hans Wilmers.</p>\r\n<p>Thanks to Stefan Bauer, Franco Gatty, Liselotte Grand, Paula Haslinger, Michael Huber, Fabian Lang, David Lang, Henning Linaker and Ernst Steindl.</p>\r\n<p>Video, sound, and editing: Asbj&oslash;rn Blokkum Fl&oslash;<br />Camera assistant: Fabian Lang</p>\r\n<p>More about <em>Electric Rain Krems</em>:<br /><a href=\"https://www.asbjornflo.net/en/art/electric-rain-krems/\">https://www.asbjornflo.net/en/art/electric-rain-krems/</a></p>\r\n<p><span style=\"text-decoration: 
line-through;\">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </span></p>\r\n<p><strong>Biography</strong></p>\r\n<p>Asbj&oslash;rn Blokkum Fl&oslash; (b. 1973, Volda) lives and works in Oslo. Fl&oslash;&rsquo;s works treat sound as both a means of communication and as purely abstract sound material, and in his works he has worked on both objects from music history and the intrinsic value of sound.</p>\r\n<p>In the installation <em>Doppelg&auml;nger</em> (Bergen Kunsthall, 2014) the materiality of sound and sounding metal objects are the center of attention while rain as a sonic and climatic phenomenon was the starting point for the site-specific sound installation <em>Electric Rain Krems</em> (2022), exhibited at Klangraum Krems in Krems, Austria. 
These works are within a critical sound art tradition, but Fl&oslash;&rsquo;s works are nevertheless more often sensual and immediate.</p>\r\n<p>Fl&oslash;s works have been shown, performed and presented at, among others, DEAF (Dutch Electronic Arts Festival, Rotterdam, The Netherlands), ICMC (International Computer Music Conference, Denton, USA), Klangraum Krems (Krems, Austria), NIME (New Interfaces for Musical Expression, Baton Rouge, USA), Synth&eacute;se (International Festival of electronic music and sonic art, Bourges, France), Sound around Kaliningrad (International Forum of Experimental Music and Sound Art, National Center for Contemporary Arts (NCCA), Kaliningrad, Russia), Prix Italia (Prix Italia, Radio Television Italiana, Italy), Ars Acustica (EBU Ars Acustica Group, European Broadcasting Union), and Bergen Kunsthall, Henie-Onstad Art Center, Kunstnernes hus, Atelier Nord, Ultima, Borealis Festival, Ibsen Festival, Ekko Festival and Greenland Chamber Music Festival in Norway.</p>",
        "topics": [
            {
                "id": 95,
                "name": "Acoustics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 143,
                "name": "Ecology",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 297,
                "name": "Electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 272,
                "name": "Generative",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 153,
                "name": "Immersive",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 62,
                "name": "Max",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 61,
                "name": "Mubu",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 157,
                "name": "Real-time",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 109,
                "name": "Spat",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 39,
                "name": "Spatialisation",
                "status": 2,
                "is_faceted": true,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 25434,
            "forum_user": {
                "id": 25407,
                "user": 25434,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/87f103a1c5152b473cb6bc948323a3bc?s=120&d=retro",
                "biography": null,
                "date_modified": "2025-06-20T15:32:42.051405+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "asbjornflo",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "the-making-of-electric-rain-krems",
        "pk": 2207,
        "published": true,
        "publish_date": "2023-04-12T16:31:16+02:00"
    },
    {
        "title": "Neural Speech Synthesis",
        "description": "Presented during the IRCAM Forum @NYU 2022",
        "content": "<p>In this talk, Nicolas Obin and Axel Roebel from the Sound Analysis and Synthesis (AS) team will present their latest research on neural speech synthesis with a particular focus on three axis: speech synthesis using neural vocoder, neural voice identity conversion with few-shot learning, and neural speech emotion transformation.</p>\r\n<p>The talk will be illustrated using numerous examples including the vocal deep fake reconstruction of past personalities, such as the French comedian and singer Dalida or the father of science-fiction Isaac Asimov.</p>",
        "topics": [],
        "user": {
            "pk": 18040,
            "forum_user": {
                "id": 18034,
                "user": 18040,
                "first_name": "Nicolas",
                "last_name": "Obin",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/9ebc983a81802c3a39a1531605bc7c62?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-12-10T16:42:35.479170+01:00",
                "is_premium": true,
                "is_internal_user": true,
                "vip": false,
                "notify_updated_contents": true,
                "notify_new_project_discussion_threads": true,
                "has_newsletter_subscription": false,
                "memberships": [
                    {
                        "id": 222,
                        "forum_user": 18034,
                        "date_start": "1970-01-01",
                        "date_end": "2126-03-05",
                        "type": 0,
                        "keys": [
                            {
                                "id": 373,
                                "membership": 222
                            }
                        ],
                        "type_string": null,
                        "num_keys": 5000,
                        "is_valid": true
                    }
                ]
            },
            "username": "nobin",
            "first_name": "Nicolas",
            "last_name": "Obin",
            "bookmarks": [
                {
                    "event_name": "follow",
                    "emitter_content_type": 107,
                    "emitter_object_id": 576,
                    "user": 18040,
                    "subscription_meta": {}
                }
            ]
        },
        "slug": "neural-speech-synthesis",
        "pk": 1333,
        "published": true,
        "publish_date": "2022-09-13T12:30:11+02:00"
    },
    {
        "title": "Echolocation I",
        "description": "This presentation will serve as an immersive city soundscape mixing real-time live Police and EMS audio with that of The Conet Project, A collection of over 130 Shortwave Number Station Recordings curated into a comprehensive five disk compilation by British record label Iridal-Discs. These shortwave recordings are an amalgamation of radio transmissions, train station lullabies, lo-fi radio interviews, instructions, and static infused counting exercises. Unlike other pirate shortwave broadcasts, these eerie transmissions would alternate for years without any pause scheduled at seemingly random times throughout the years, leading many to believe that these broadcasts were a government operated espionage system, used to contact central intelligence agencies. During the performance I plan to reroute the EMS and Police broadcast signals of New York City nearest  the venue and will be playfully manipulating the ethereal nature of both the live and pre-recorded radio signals, and amplifying two separate sets of opaque transmissions. ",
        "content": "",
        "topics": [],
        "user": {
            "pk": 30994,
            "forum_user": {
                "id": 30947,
                "user": 30994,
                "first_name": "Kamari",
                "last_name": "Carter",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/0434b164756392b7e0174368f6d09a1d?s=120&d=retro",
                "biography": "Kamari Carter (b. 1992) is a producer, performer, sound designer, and installation artist primarily working with sound and found objects. Carter's practice circumvents materiality and familiarity through a variety of recording and amplification techniques to investigate notions such as space, systems of identity, oppression, control, and surveillance. Driven by the probative nature of perception and the concept of conversation and social science, he seeks to expand narrative structures through sonic stillness. Carter’s work has been exhibited at such venues as Automata Arts, MoMA, Mana Contemporary, RISD Museum, Flux Factory, Lenfest Center for the Arts, WaveHill and has been featured in a range of major publications including ArtNet, Precog Magazine, LevelGround and WhiteWall. Carter holds a BFA in Music Technology from California Institute of the Arts and an MFA in Sound Art from Columbia University.",
                "date_modified": "2022-08-26T07:40:45+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "kamari",
            "first_name": "Kamari",
            "last_name": "Carter",
            "bookmarks": []
        },
        "slug": "echolocation-i",
        "pk": 1325,
        "published": false,
        "publish_date": "2022-09-12T17:44:17.198425+02:00"
    },
    {
        "title": "FLUX",
        "description": "Presented during the IRCAM Forum Workshops 2023 in Paris",
        "content": "<p>FLUX is an immersive spatial audio composition designed for IRCAM&rsquo;s 6 channel speaker setup. The work explores the relationship between rivers, cities and people, illustrating commonalities and differences of the perception of rivers across the world. Utilising recordings of a range of different people speaking about their personal experiences with rivers, FLUX brings attention to the significance of rivers in our memories, daily lives, and communities. &nbsp;</p>\n<p>The use of spatial audio allows the audience to experience a sense of geographical distance in a physical environment and illustrates the interconnectedness of bodies of water.&nbsp;</p>",
        "topics": [],
        "user": {
            "pk": 32945,
            "forum_user": {
                "id": 32897,
                "user": 32945,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/6a1339760950a519a0910c128edfbbef?s=120&d=retro",
                "biography": "Ojasvani Dahiya is exploring creating interactive and immersive experiences that look at realities of the distant past and the far future which are grounded in the present. She is currently experimenting with new and emerging forms of technology to create visual experiences informed through sound and music. Her areas of interest are post-coloniality, identity, dreams and altered states of consciousness. Ojasvani graduated from Emerson College, Boston (2020) with a BFA in Media Arts Production, and went on to work in the Film/TV post-production industry in Los Angeles. She is currently on the Digital Direction MA program at the Royal College of Art.",
                "date_modified": "2023-11-06T21:49:51.196641+01:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "odahiya",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "flux-2",
        "pk": 2162,
        "published": false,
        "publish_date": "2023-03-25T15:34:35.727949+01:00"
    },
    {
        "title": "Spacebar Counter",
        "description": "The spacebar counter helps you count how many times you can hit the spacebar remote control in the time offered.",
        "content": "<p><strong>What is Spacebar Counter?</strong></p>\n<p>The <a href=\"http://spacebarcounter.org\">space bar counter</a> helps you count how many times you can hit the spacebar remote control in the time offered. Generally, some video games require you to use the spacebar for essential activities like shooting or jumping. So, you better get to it! I don't think I need to explain where the spacebar is on the keyboard (everyone knows it, duh!). So, let's see how journalism fares in the Spacebar 1000 exam without further ado.</p>\n<p><strong>Why do you require a spacebar speed test?</strong></p>\n<p>You must focus on increasing your spacebar clicking speed if you are a gamer. The spacebar key is where your character jumps or crawls in numerous games. So, how quickly can you jump or hide behind an obstacle to fool your opponent? Below, the spacebar counter clicks do the duty for you, and you can exercise your spacebar click rate on the spacebar counter. If your work is related to document production, where you create multiple Word files every day, you understand that file development is not an easy task, and the work requires a high degree of excellence. Also, everyone knows how fast you are kind; you can get your work done faster.</p>\n<p><strong>How does the Spacebar Counter Tool Work?</strong></p>\n<p>It takes about 3 seconds to understand how this tool works. However, I will quickly discuss how it is used. Usually, software programmers need to explain how things work. After being associated with a web page, all you have to do is push the spacebar. Each time you push it, start counting down the counter. If you want to measure your speed, I suggest running a timer before you start pushing the spacebar. The restart switch also allows you to reset the checking process. The spacebar counter tool is easy to use. Please call me if you have problems using it in your internet browser. Let me explain to you how it is used.</p>",
        "topics": [
            {
                "id": 706,
                "name": "Space bar clicker",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 865,
                "name": "spacebar clicker",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 708,
                "name": "Space bar counter",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 707,
                "name": "Spacebar counter",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 30897,
            "forum_user": {
                "id": 30850,
                "user": 30897,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/d574f009f81c30ef18dd793696a41ed3?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "david09sm",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "spacebar-counter",
        "pk": 1232,
        "published": false,
        "publish_date": "2022-08-09T08:17:05.730444+02:00"
    },
    {
        "title": " Musicien de la vieillesse",
        "description": "Bonjour à tous,Je suis un homme plus âgé qui joue principalement du jazz toute ma vie.Dernièrement, ma main a commencé à me faire un peu plus mal",
        "content": "<p>Bonjour &agrave; tous,</p>\n<p>Je suis un homme plus &acirc;g&eacute; qui joue principalement du jazz toute ma vie.</p>\n<p>Derni&egrave;rement, ma main a commenc&eacute; &agrave; me faire un peu plus mal. Je joue de la basse et du piano.</p>\n<p>Je me demandais si l'un de vous autres musiciens avec plus d'exp&eacute;rience :) a d&eacute;j&agrave; eu ce probl&egrave;me et comment vous le g&eacute;rez.</p>\n<p>Mon cousin m'a donn&eacute; <a href=\"https://www.cibdol.fr/huile-cbd\">&ccedil;a</a>. Je pense que cela aide mais je ne suis pas encore s&ucirc;r. Laissez-moi savoir ce que vous pensez.</p>",
        "topics": [],
        "user": {
            "pk": 19041,
            "forum_user": {
                "id": 19034,
                "user": 19041,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/3d66610ff0708ff5d8915e61fee18d75?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "lucalisa",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "musicien-de-la-vieillesse",
        "pk": 789,
        "published": false,
        "publish_date": "2020-11-11T13:15:32.430530+01:00"
    },
    {
        "title": "Tutorial: Generating Acoustic Scores with AudioGuide",
        "description": "Learn how to use AudioGuide v1.74 and Bach in Max/MSP to generate acoustic instrument parts for any sized ensemble.",
        "content": "<p>AudioGuide now lets you to do concatenative synthesis to create instrumental parts for acoustic ensembles of any size. This is facilitated by some new options-file objects as well as an output file format for bach.roll.</p>\r\n<p>AudioGuide's instrument writing approach revolves around three principles:</p>\r\n<p><strong>1. detailed notation </strong>that let's you denote articulation, text, and noteheads (thanks to bach's slots).</p>\r\n<p><strong>2. temporal controls</strong> that let you define how each instrument may behave in time to ensure that generated scores are playable and idiomatic, including fine tuning the speed of notes, leapiness, polyphonic possibilities, and adding delays when changing from one playing technique to another.&nbsp;</p>\r\n<p><strong>3. flexible instrument objects </strong>that support writing for traditional instruments, but also let you design bespoke, experimental instruments for a wide variety of performative contexts.</p>\r\n<p>&nbsp;</p>\r\n<p>Consider the following example. 
Using a <a href=\"http://www.benhackbarth.com/audioGuide/tutorials/instruments/ex-target.mp3\">short target sound of speech</a> and a large corpus of string instrument sounds, AudioGuide was used to generate the following bach.roll data for string quartet:</p>\r\n<p><img src=\"/media/uploads/user/19b80584974b195a6558aeb06083960e.png\" alt=\"\" width=\"1456\" height=\"714\" /></p>\r\n<p>Here is AudioGuide's <a href=\"http://www.benhackbarth.com/audioGuide/tutorials/instruments/ex-mix.mp3\">audio output of the target and quartet mixed</a>, just <a href=\"http://www.benhackbarth.com/audioGuide/tutorials/instruments/ex-quartet.mp3\">the quartet</a> as well as <a href=\"http://www.benhackbarth.com/audioGuide/tutorials/instruments/ex-vln1.mp3\">violin 1</a>, <a href=\"http://www.benhackbarth.com/audioGuide/tutorials/instruments/ex-vln2.mp3\">violin 2</a>, <a href=\"http://www.benhackbarth.com/audioGuide/tutorials/instruments/ex-vla.mp3\">viola</a>, and <a href=\"http://www.benhackbarth.com/audioGuide/tutorials/instruments/ex-vc.mp3\">cello</a> parts.</p>\r\n<p>&nbsp;</p>\r\n<p>You can learn more about all of the possibilities in a new video tutorial:</p>\r\n<p><iframe width=\"640\" height=\"640\" src=\"//www.youtube.com/embed/AyD_ZYjff2c\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"></iframe></p>\r\n<p>&nbsp;</p>\r\n<p>If you're new to AudioGuide, check out the detailed <a href=\"https://www.youtube.com/watch?v=ZqjYjDLZRAw&amp;list=PL0KTlumV2e1Z-6YOegignhnPA5OVPDuFm\">5-part tutorial series</a>. The new instrument writing infrastructure is documented <a href=\"http://www.benhackbarth.com/audioGuide/docs_v1.74.html#TheINSTRUMENTSVariable\">here</a>.</p>",
        "topics": [
            {
                "id": 172,
                "name": "Analyse du son",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 70,
                "name": "Audio",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 321,
                "name": "Concatenative synthesis",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 40,
                "name": "Orchestration",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 282,
                "name": "Tutorials",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 1042,
            "forum_user": {
                "id": 1042,
                "user": 1042,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/90f462285c49a9b4aafe40ec8cac2cc9?s=120&d=retro",
                "biography": null,
                "date_modified": "2024-04-17T13:20:36.063451+02:00",
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "hackbarth",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "tutorial-generating-acoustic-scores-with-audioguide",
        "pk": 1031,
        "published": true,
        "publish_date": "2022-01-10T18:15:39+01:00"
    },
    {
        "title": "Live Electronics: Performer Agency and Audience Reception",
        "description": "My research is focused on developing new methods of performer agency in live electronic music and utilising audio-visual symbiosis to enhance audience engagement. I do this through the design of new gestural software instruments and the development of new strategies of conveying musical expression in the performance of live electronics. \nI will present my designs for two works, The Phonetics Project (voice+electronics) and Hear to Listen (flute+electronics). \n",
        "content": "",
        "topics": [
            {
                "id": 310,
                "name": "Audience reception",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 101,
                "name": "Gesture",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 308,
                "name": "Live electronics",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 102,
                "name": "Movement",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 6732,
            "forum_user": {
                "id": 6729,
                "user": 6732,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/e1e0db848bb740557c35e435ff4a5c88?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "jennkirby",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "live-electronics-performer-agency-and-audience-reception",
        "pk": 458,
        "published": false,
        "publish_date": "2020-02-05T11:43:33.572549+01:00"
    },
    {
        "title": "Entering the musician’s world: the use of VR to heighten emotions in music. ",
        "description": "The emergence of immersive technologies, such as Virtual Reality (VR), might change the way people produce and consume music in the modern age. However, little research has regarded these technologies’ influence on the users’ perception and experience of music.   \n\nThis study evaluated participants’ sense of presence in the display environment and their music-induced and perceived emotions when the same music video is in VR or on desktop. Variables were measured using self-report questionnaires and emotions followed a three-dimensional model (pleasantness, tense arousal and energetic arousal). Participants experienced a higher sense of presence and more pleasant emotions in VR compared to the desktop condition. Moreover, there were significant correlations between presence and music-induced and perceived emotions in both conditions. Finally, quantitative analysis and interview data revealed a possible influence of previous experience with VR and other variables related to the technology, media content and user characteristics on the findings.\n",
        "content": "",
        "topics": [
            {
                "id": 303,
                "name": "Immersion",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 304,
                "name": "Music-induced emotions",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 305,
                "name": "Music-perceived emotions",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            },
            {
                "id": 301,
                "name": "Virtual reality",
                "status": 2,
                "is_faceted": false,
                "is_featured": false
            }
        ],
        "user": {
            "pk": 18045,
            "forum_user": {
                "id": 18039,
                "user": 18045,
                "first_name": "",
                "last_name": "",
                "avatar": null,
                "avatar_url": "https://www.gravatar.com/avatar/c93de44b4a499d4162d9e7c1d8fea8dc?s=120&d=retro",
                "biography": null,
                "date_modified": null,
                "is_premium": false,
                "is_internal_user": false,
                "vip": false,
                "notify_updated_contents": false,
                "notify_new_project_discussion_threads": false,
                "has_newsletter_subscription": false,
                "memberships": []
            },
            "username": "theryalam",
            "first_name": "",
            "last_name": "",
            "bookmarks": []
        },
        "slug": "entering-the-musicians-world-the-use-of-vr-to-heighten-emotions-in-music",
        "pk": 455,
        "published": false,
        "publish_date": "2020-02-04T14:13:21.644974+01:00"
    }
]