This 8-channel fixed-media electroacoustic video work encompasses noise and acousmatic genres. The sonic output is controlled by custom real-time processing and surround-sound panning in Max and Ableton. We are particularly interested in the role of gesturally controlled real-time signal processing in a multichannel environment and in live sound diffusion practices that balance automated and performative gestures (as demonstrated by the mediatized performance in the final work). Overall, we make tangible the complex interplay between the software and hardware elements of our bespoke hybrid systems and visually demonstrate the role of the human performers, while embracing the potential of a surround-sound environment as a catalyst for electroacoustic composition.
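For readers unfamiliar with multichannel panning, the following is a minimal illustrative sketch of one common technique — equal-power crossfading between adjacent loudspeakers on a ring of eight — and is not the authors' Max/Ableton implementation; the function name `pan8` and the assumption of evenly spaced speakers are hypothetical.

```python
import math

def pan8(azimuth_deg):
    """Return 8 loudspeaker gains for equal-power panning on a ring of
    8 speakers spaced 45 degrees apart (speaker 0 at 0 degrees).

    A sound at an arbitrary azimuth is crossfaded between the two
    nearest speakers so that the sum of squared gains is always 1,
    keeping perceived loudness constant as the source moves."""
    az = azimuth_deg % 360.0
    spacing = 360.0 / 8
    lo = int(az // spacing) % 8          # nearer speaker (counter-clockwise side)
    hi = (lo + 1) % 8                    # nearer speaker (clockwise side)
    frac = (az - lo * spacing) / spacing # position within the pair, 0..1
    gains = [0.0] * 8
    gains[lo] = math.cos(frac * math.pi / 2)  # equal-power crossfade law
    gains[hi] = math.sin(frac * math.pi / 2)
    return gains
```

In practice a gesture (e.g. a controller's x/y position mapped to azimuth) would drive such a function per audio block, with each gain applied to the corresponding output channel.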
Although we clearly focus on performative dimensions, the primary research field is electroacoustic composition, often summed up in Dhomont's assertion that 'imagination gives wings to intangible sound': the visual modality is removed and the sound is experienced acousmatically (sound unseen). The secondary research field is surround sound diffusion in the tradition of acousmonium multichannel systems, together with the performance practices and intellectual traditions associated with them.
Our research question for this project is: to what extent can gestural interaction with bespoke tools in a multichannel sonic environment be a catalyst for new compositional work?
OAT was peer reviewed by two international juries and selected and presented: 1) in concert at the Loreto Theatre at The Sheen Center for Thought & Culture as part of the New York City Electroacoustic Music Festival (NYCEMF) in 2024; 2) in concert as part of the International Computer Music Conference (ICMC) 2024 in Seoul, South Korea.