
interview

Farzané & Arash: Artificial Acoustemology

Loes de Boer (photography)

Loes de Boer

Loes de Boer (b.1999) is a lens-based artist living in Amsterdam, The Netherlands.
She is completing her bachelor’s degree in photography at the Royal Academy of Art, The Hague, graduating in 2024.

Her creative process is deeply rooted in personal reflection, drawing inspiration from her experiences and observations. 
Within the realms of (self) portraiture and still-life, she focuses on the queer side of society; deconstruction and metamorphosis play a fundamental role. The camera becomes a tool for redefining our understanding of the self. Through dynamic manipulation, reshaping, and re-contextualization of images into an installation or publication, Loes brings the performative aspect into her work.

Her previous work has been exhibited at The Grey Space in the Middle, Den Haag (2023), and Museum De Fundatie, Zwolle (2019).

In 2023, she did an internship with Paul Kooiker.

Farzané (author)

Farzané

Farzaneh Nouri is a musician, researcher, and sound artist based in the Netherlands. Interested in experimental approaches to sound, science, and technology, she explores various disciplines such as electroacoustic music composition, computer science, and linguistics. As tools in her artistic practice, she uses creative coding, field recording, live electronics, as well as acoustic and programmable instruments. Her recent pieces investigate complex systems, natural algorithms, and human-machine interaction. Farzaneh composes for live performances, interactive installations, films, VR, and multi-sensory experiences. Her current research focuses on artificial intelligence methods in the framework of live electroacoustic music improvisation.


Arash Akbari (author)

Arash Akbari

Arash Akbari is a transdisciplinary artist. His interest in dynamic art systems, human perception, nonlinear narrative, and the co-existence between physical and digital worlds compelled him to explore the fields of generative systems, interaction design, immersive technologies, and real-time processing. With a critical mindset toward the dominant paradigm of technology, he examines the counternarratives in which computational processes, interactive cybernetic systems, and their emergent behaviors can evoke concepts, ideas, and questions as well as social and emotional responses and impacts. Akbari directs his experimental practices into audio-visual performances and installations, interactive software, and multisensory experiences.

His music compositions investigate experimental approaches to sound generation, field recordings, acoustic instrumentation, digital synthesis, DSP, and noise to create immersive sonic environments that explore the agency of autonomous systems, audification, indeterminacy, memory, and the perception of time and space.


Published 07 Mar 2024 · 15 min read

Image by Loes de Boer

Can we develop a machine learning approach that is situated in cultural, ecological, or cosmological terms? Farzané and Arash have been artistic collaborators since 2017. In this interview, they let us into their working partnership, sharing the complementary ways in which they deal with artificial intelligence. They share a fascination with acoustemology, the practice of ‘knowing’ and learning through sound, and talk about how involving an AI agent in as subjective a practice as acoustemology creates new problems with, and new insights into, the fabric of knowledge.

You can view Embedded/Embodied, Farzané and Arash’s art commission for The Couch, here.

How long have you two been working together?

Arash: We’ve been working together on different projects since 2017. They range from curatorial work and audio-visual performances to interactive installations and XR experiences.

How do your practices complement each other?

Arash: Each of us has our ongoing artistic research projects. And some of our concerns and ideas overlap and resonate, so we work on them as collaborative projects. To me, collaboration is more than just sharing tasks on a team. The synergy of collaboration that drives the initial concepts, ideas, and the creative process cannot be achieved in personal projects. It doesn’t mean that artists shouldn’t or can’t work alone, but that they become a different version of themselves in collaborations.

Farzané: Each of us possesses a unique set of skills and areas of expertise, yet our knowledge also converges in certain areas. This dynamic allows us to complement each other’s works and ideas, engaging in conversations that encourage speculation from alternative perspectives and foster thought-provoking discussions.

It doesn’t mean that artists shouldn’t or can’t work alone, but that they become a different version of themselves in collaborations.

Image by Loes de Boer

What's the most significant thing you’ve learned from each other?

Arash: Farzané has her own working routine, problem-solving, and creative processes. I really like the way she approaches sound. Although it’s very experimental and abstract, the narrative is there. Aesthetic choices are planned, though on another level, embedded within the system itself. The balance she strikes between the agency of a computational system and the artistic, compositional elements is truly remarkable. She’s developing her own unique style. Being in touch with these processes is inspiring to me. It brings new perspectives that I always consider, even in my solo projects.

Farzané: Arash is determined, hard-working, and persistent. Working alongside him has taught me the importance of striking a harmonious balance between the technical, artistic, and conceptual elements of the creative process: instead of letting the technical aspects overwhelm the work, harnessing them as a means of creative self-expression. Also, he is less likely to panic in critical situations than I tend to (:-)), and tries to find solutions. This has definitely influenced our working dynamics.

Tell me about the work you are installing at Zone2Source in Amsterdam for the Sonic Acts Biennial 2024, and about the work you are presenting on The Couch.

Arash: The piece we are presenting is called “Embedded/Embodied”. It’s a speculation on the way AI recognition, generation, and communication can be approached within the paradigm of acoustemology. It also taps into the epistemological possibility of using Augmented Reality as a setting for developing a relational form of sonic knowledge, territorialising computational sonic investigations within the visual field through the interplay between hearing and the other senses. The AR application receives data from the different modules of the system and attempts to embed the sonic knowledge of the AI model into the space. It acts as a visible force or entity created by sound and sonic investigations.

On The Couch, we are presenting a virtual soundwalk for exploring “Embedded/Embodied” over the web. The aim is to create an interactive documentation of this artistic research project where the system's learning epochs can be explored in a spatiotemporal manner. It is an archival diary of the system’s learning process and situated interactions with the local environment.

Visitors can navigate through virtual time and space and experience the emergent decisions and aesthetics of the system as it responds to the local field recordings. They can also compare the evolution of the system’s behaviours across different stages of its training process.
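The “archival diary” idea can be pictured with a minimal sketch, offered purely as an illustration and not as the artists’ actual system: a tiny model is trained on spectral frames of a recording, a snapshot of its weights is stored after every epoch, and the same material can later be replayed through any snapshot to inspect how the system responded at that stage. All names here (TinyAutoencoder, train_with_diary, replay) and the random stand-in data are hypothetical; the sketch assumes a Python/PyTorch setup.

import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    # A deliberately small model standing in for whatever the system actually learns.
    def __init__(self, n_features=64, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, latent), nn.Tanh())
        self.decoder = nn.Linear(latent, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_with_diary(frames, epochs=10):
    # frames: tensor of shape [n_frames, n_features], e.g. spectral frames
    # extracted from a field recording.
    model = TinyAutoencoder(frames.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    diary = []
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(frames), frames)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Archive this epoch's weights so the stage can be revisited later.
        diary.append({k: v.detach().clone() for k, v in model.state_dict().items()})
    return diary

def replay(frames, diary, epoch):
    # Run the same recording through the model as it was at a given epoch.
    model = TinyAutoencoder(frames.shape[1])
    model.load_state_dict(diary[epoch])
    with torch.no_grad():
        return model(frames)

frames = torch.rand(500, 64)          # stand-in for real field-recording features
diary = train_with_diary(frames)
early, late = replay(frames, diary, 0), replay(frames, diary, 9)

In the installation the responses drive sound and AR visuals rather than being inspected as tensors, but the structure described in the interview, one snapshot per epoch that visitors can jump between, is the part this sketch tries to capture.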

Image by Loes de Boer

What is acoustemology and why is it an important addition to our arsenal of ‘ways of knowing’?

Farzané: Acoustemology, a concept unifying "acoustics" and "epistemology", investigates sound as a means of obtaining knowledge: it delves into what can be known through sound and listening. It explores the role of sound in cognition, perception, and knowledge construction, impacting fields such as philosophy, psychology, and technology, and it emphasizes the interplay between humans, non-human entities, technologies, and materialities within the sonic environment. The idea of developing a form of sonic experience, investigation, and creation that goes beyond the sense of hearing can be a great speculative framework for unifying human subjective lived experience and cultural histories, as well as nonhuman actors and their interactions with each other.

Arash: To me, the most interesting aspect of acoustemology is the way it approaches knowledge, its epistemology. It’s not like other sonic knowledge systems that somehow try to replace oculocentrism with sonocentrism. It also doesn’t approach sound with an analytical mindset, trying to reduce it to data or indicators in order to develop some sort of generalised laws and theories. It can be said that the empiricist and positivist methods of modern science share the same analytical mindset. We become so preoccupied with the mechanisms behind phenomena like colors, sound, and love that our direct experience of these phenomena is somehow ignored. We try to quantify them into measurable units while the qualities that can’t be expressed in mathematical formulas are neglected.

Instead, acoustemology tries to develop a form of knowledge that is contingent, situated, affective, and embodied. It accepts relational ontology that unites different senses, emotions, actions, histories, and symbolic cultural values. It’s more towards a way of living.

We can leverage this form of epistemology to speculate on our current reductionist, analytical, and quantitative approach to AI models that constantly try to generalize and objectify reality by reducing this infinite flux to captured datasets. Datasets that only contain the possible, filtered, and sometimes biased observations.

We can ponder questions such as: Can we develop a machine learning approach that is locative in cultural, ecological, and cosmological terms? Is it possible to accept contingency instead of rigid, predictable automation? How can we incorporate diversity into our approach to designing and utilizing AI technologies? And in this situation, how can these models communicate with each other? Can we position these models within the environment as relational actors rather than external controllers dictating every aspect? Artists can engage with these questions, as they have the opportunity to work with machine learning beyond technological, pragmatic, and scientific rationality.

The idea of developing a form of sonic experience, investigation, and creation that goes beyond the sense of hearing can be a great speculative framework for unifying human subjective lived experience and cultural histories, as well as nonhuman actors and their interactions with each other.

Image by Loes de Boer

Since when have you been working with artificial intelligence? Why is this an important tool for your work?

Arash: I’ve always been interested in leveraging unconscious exploration, contingency, and incompleteness in my creative process. The idea of freeing my work from my choices and cultural, psychological, and emotional states, to varying degrees, is fascinating to me. This in itself is a form of collaboration and improvisation. The artist becomes an intermediary between a specific or an unknown source and its representation. It’s as if the artist creates a system of translation for a source to reveal itself. It’s so sublime to me! Using computational generative art methods is a great way to do so, although it’s more like a planned autonomy: you, as the artist, define all the rules, and even if the system becomes complex, you know exactly what the access codes are. Using AI, however, is like creating an artwork with an entity from a different umwelt. I don’t want to mystify or anthropomorphize it, but although the artist defines the problems, it’s the agent that decides how to solve them. It’s more like the difference between writing an expert system and training an unsupervised deep learning model. But it all depends on how you implement and use your machine learning models, because unlike the tech bros, I find predictable, fully optimized, high-performance models boring. This is why I like to train my small models on my own custom datasets, with all their errors and insufficiencies.
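To make that contrast concrete, here is a toy, entirely hypothetical comparison, not taken from Arash’s practice: an expert-system style rule whose mapping the artist spells out by hand, next to a small unsupervised model that decides for itself how to partition the same custom data. It assumes Python with NumPy and scikit-learn; the feature names and thresholds are invented for the example.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for a small custom dataset: (loudness, brightness) pairs per frame.
features = rng.random((200, 2))

# Expert-system style: the artist writes the rule, so every decision is known in advance.
def rule_based(frame):
    loudness, brightness = frame
    if loudness > 0.7:
        return "dense texture"
    return "sparse texture" if brightness < 0.3 else "mid texture"

# Unsupervised model: the artist only poses the problem ("find three groupings");
# how the data actually gets partitioned is left to the algorithm.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

frame = features[0]
print(rule_based(frame), model.predict(frame.reshape(1, -1))[0])

Even at this scale the difference in “access codes” is visible: the rule can be read line by line, while the clusters have to be probed to be understood.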

Farzané: I started working with AI in 2020, as I was experimenting with computer music. Experimenting with generative environments and artificial life forms, I was captivated by the incredible evolution of my ideas. The constant dialogue with the computer became a driving force for my creativity, igniting a collaborative spark that enhanced my artistic process. The most exciting part was witnessing the emergence of entirely new soundscapes that were born out of this harmonious interaction. Each session yielded novel and unexpected results, pushing the boundaries of my musical exploration. What intrigued me was the ability to craft my very own collaborative compositional tool. With it, I could traverse uncharted sonic territories and discover unexplored depths of musical expression. Another aspect that fascinated me was the vast realm of possibilities for interaction. This enigmatic realm granted me the ability to create spontaneous compositions, where emergent sonic textures materialized in real-time interaction.

How does working with AI as an artist shape your understanding of it?

Arash: It all depends on how the artist works. For me, it’s demystifying. The artist can experience how this flow of abstractions can become disoriented and detached from reality. They can feel how socio-politically alienating it is to be controlled by a black box. But it’s also so rewarding, weird, and potent. I think it is the most mythical form of computation so far. It can be a tool for the exploration of the unknown. It’s capable of forcing you out of your comfort zone. But I don’t like to be a part of the hype. I don’t want to embrace automation, prediction, and accuracy. I try to go in the opposite direction of what corporate AI is trying to achieve.

Farzané: I completely agree with the notion of demystifying. Collaborating with AI allows us to gain valuable insights into its decision-making processes, data requirements, and training methods. This transparency plays a crucial role in our understanding of AI, enabling us to comprehend its limitations and potential biases. For instance, by working with AI, we become aware of biases that may be ingrained in data, algorithms, or decision-making procedures. Understanding these biases empowers us to comprehend the potential impact they can have on our systems. Most importantly, working actively with AI encourages us to adopt innovative and experimental approaches once we are fully aware of its limitations and potentials. However, it seems that not all artists are able to break free from the loop of creating superficial eye candy without delving deeper into the possibilities. In my experience, working with AI has revealed its versatility and ability to fulfil various roles far beyond mere automation. As a result, I aim to promote speculative approaches, as opposed to employing AI in conventional ways.

Image by Loes de Boer

How do you train an AI to improvise, and how close does that come to human creativity?

Arash: I know your question is about the affordances, the synergy, and the creative sphere of machine decisions and human artistic choices within improvisational settings, but I can’t help but say what just came to my mind.

I think trying to compare AI to humans is the most misguided battle we are fighting. On one hand, tech bros promote their models by proving that they outperform humans; on the other, philosophers and scholars in the humanities try to raise the level of humanity beyond what AI is currently capable of doing. It’s ridiculous and pointless.

I think we need to go back to the fundamental question of how we define humanity in our worldview, sometimes mystically!

There’s a short video clip of a panel discussion on AI between Elon Musk and Jack Ma. Although everyone makes fun of Ma for his shallow technical perspective on the subject, he was somehow trying to articulate another cosmological view of humans and machines. I remember he said: “You shouldn’t play chess with machines. Don’t do that. It’s stupid!” I liked that!

Farzané: I experiment with various ML algorithms and system architectures to explore free improvisation, tailoring my approach to the context of each project. I curate and use my own datasets, which lets me train the models on data that I personally provide. Such a blend of technical and artistic visions allows me to seamlessly convert technical decisions into artistic expressions, incorporating analysis approaches, algorithms, and creative constraints into the very fabric of these processes.

In my opinion, reducing the entirety of artistic endeavor, with its rich implicit semiotic and intersubjective qualities, to quantifiable data by replacing human artists with computer algorithms is a perspective that limits the true essence of art itself. Improvisational intelligence should not be defined solely as the analysis of data; machines may well surpass humans in terms of information storage and computational power, yet this reductionist viewpoint fails to recognize the profound impact that the semantic and cultural aspects of human nature have on the creation of art. Furthermore, attributing human-like improvisation abilities to computers would be implausible in a practice like improvisation, where social experiences, emotions, consciousness, and human cognition play vital roles. The relational human component inherent to improvisation renders any discussion about replacing humans with machines redundant. Instead, the focus should lie in exploring the aesthetics, possibilities, and novel forms of collaboration that such systems can offer, as they possess entirely different worldviews and affordances.

The question that piques my curiosity is not whether machines can exhibit human creativity, but how human-machine interactions can lead to the development of aesthetics or epistemologies based on the combined agency of both entities. This entails emphasizing the significance of the relationships between the individual components and the overall system, as well as the formation of space within this context. By embracing this perspective, we can utilize artificial agents not as substitutes for human improvisers, but as catalysts to uncover new artistic avenues and foster innovative partnerships, enriching our creative landscape.

This entails emphasizing the significance of the relationships between the individual components and the overall system, as well as the formation of space within this context.

Image by Loes de Boer

What do you expect from a project that has iterations in both the physical and virtual worlds?

Arash and Farzané: Each world has its affordances. The main challenge for us is the way we adapt the project to these affordances instead of just copying and pasting it. No matter how hard we try, digitalising reduces a physical thing and detaches it from the flux of physicality. As an example, we need to reduce human relations to the sharing of information in order to fit them into digital networks. On the other hand, digitalisation creates a very weird form of plasticity: there is no time and space, it’s always here and now, and everything morphs into something else in the blink of an eye. That is what we did when we created the digital iteration of “Embedded/Embodied” for The Couch. The soundscape is fixed, a specific timeframe looping, while we have an archival diary of the AI agent’s learning process to explore how the model responds to the same soundscape at different learning epochs. It differs from the physical version in that here we have a dynamic spatiotemporal setting: the soundscape sits in a fixed timeframe while the agent evolves, and we as visitors can jump into different learning timeframes to experience how the space sounds at that moment.

What worries you most about AI, and what is the most positive thing you see coming out of it?

Arash: Predictions can ruin the concept of the future and turn it into a constant now! Datafication can destroy the meaning of life by its inability to capture the flux of existence, by its ignorance in understanding the incomputable. Social debates, political decisions, and human errors with all their pros and cons may be replaced by machines that solve statistical problems. It forces everything to become units of calculation, homogenized and quantifiable.

Farzané: But we think our main crisis is not the computation but the worldview behind its development. An AI model is constantly looking for extraction, optimization, and efficiency. But at the end of the day, who defines these parameters? What is the meaning of efficiency? Efficiency for the benefit of whom, and at what cost? We can’t ignore it, just as we couldn’t ignore the hegemony of the current form of social media, which was modeled after a neoliberal idea of the atomized society. But that doesn’t mean we are powerless. We have the power to reject, to hack, to speculate on alternative methods. No matter how persuasive, shiny, and polished a design is, if there’s no one to use it, it is a failure. Ideally, if we rethink our values, we can rethink what we expect from technology, and that can be a big step.

Image by Loes de Boer