Ben Bogart | Interview | Chantier IA
Ben Bogart is a Canadian media artist with a career spanning some 25 years, 20 of which have been dedicated to machine learning (ML) processes and AI. Residing on the territories of the Musqueam, Squamish and Tsleil-Waututh First Peoples (aka Vancouver), Bogart holds master's and doctoral degrees from the School of Interactive Arts and Technology at Simon Fraser University. During their master's, they developed a site-specific artwork that uses images captured live in the context of the installation as raw material for its "creative" process (2006–2008). For their doctoral thesis, they created A Machine That Dreams, a piece framed as both a model of dreaming and a site-specific artistic work manifesting an Integrative Theory of visual cognition (2009–2014). Their recent work involves building Machine Subjects that appropriate and reconstruct cultural artifacts using artificial intelligence.
I had the chance to discuss their artistic practice and many aspects of ML and AI, notably how boundary-making processes define structures and concepts; the notion of agential realism and its relation to their practice; references to popular culture as material in their work; and how meaning relates to reality and discourse.
Enjoy!
Nathalie Bachand
Hi Ben, so glad we’re having this exchange about your work and everything surrounding it. I just read your artistic statement once again, to wrap my head around it a bit more tightly. Machine learning as boundary-making really seems to be a central theme. I’d like to start by asking you to define that notion of boundary-making, in itself as well as in the context of ML.
I was always implicitly interested in boundary-making. In the late 90s I did some work with cutting Gaussian functions to create smooth forms (implicit curves). For as long as I can remember I've had this sense that there's an underlying continuity beneath everything, such that the hard edges of concepts and objects are imposed or even illusory. Looking back, this sense of continuity is the foundation of my gender identity (non-binary and agender), by definition at odds with the gender binary. It took me 25 years to work through my internalized transphobia and accept my gender. In 1999 or 2000, I made a video artwork called FeHo, inspired by a book (whose name now escapes me) written by a trans woman that referenced Ze/Zir pronouns. So even 20+ years ago I had the language, and even the confidence, to produce this kind of work without having actually processed my own gender. That said, I believe this also links into my neurodivergence: I tend to understand things in multiple dimensions.
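To make "cutting" a Gaussian concrete, here is a minimal Python sketch (my own illustration, assuming numpy and matplotlib; Bogart's original code is not described in the interview): summing smooth 2D Gaussians produces a continuous field, and slicing that field at a threshold yields a smooth implicit curve, a hard boundary imposed on an underlying continuity.

```python
# Illustrative sketch only: a continuous field built from Gaussians,
# "cut" at a threshold to produce a smooth implicit curve.
import numpy as np
import matplotlib.pyplot as plt

def gaussian(xx, yy, cx, cy, sigma):
    """A smooth 2D Gaussian bump centred at (cx, cy)."""
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

xx, yy = np.meshgrid(np.linspace(0, 1, 400), np.linspace(0, 1, 400))

# The underlying continuity: three overlapping bumps summed into one field.
field = (gaussian(xx, yy, 0.35, 0.5, 0.12)
         + gaussian(xx, yy, 0.6, 0.55, 0.10)
         + gaussian(xx, yy, 0.5, 0.3, 0.08))

# The "cut": one chosen level set imposes a hard-edged, smooth form.
# A different threshold enacts a different boundary on the same field.
plt.contour(xx, yy, field, levels=[0.5])
plt.gca().set_aspect("equal")
plt.show()
```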
Early on in my practice, I was interested in interactivity, which I now understand as a way of rearticulating the boundaries between myself, the viewer and the work. My interest in site-specificity is a reconfiguration of the boundary separating the viewer from the world around them. I remember wondering why an interactive artwork is supposed to respond only to people and not other animals, or even plants. Eventually I lost interest in interactivity because of all the baggage around control, viewer expectation and usability. I wanted to explore the boundaries of autonomy and authorship as enabled by an artistic practice involving technology, which was highly inspired by David Rokeby. It was my interest in site-specificity that brought me to machine learning in the first place. I saw ML as a solution to the problem of automating relationality and site-specificity: media artworks create their "own" relation to place while being neither mere reflection nor random and independent, but something in between, something that is a surprise.

Aporia, 2001
My interest in site goes all the way back to my BFA thesis project (Oracle, 2003), which mapped live weather data (through chaotic equations) onto a text in multiple languages. In 2005, with my first Canada Council grant and a collaboration with then-architecture graduate student Donna Marie Vakalis, I started using cameras to collect images of the environment around an artwork. Boundary-making was always generative for me; the boundaries that define concepts, ideas, classes, objects, etc. are dynamic and situated in a field of continuity or even potentiality. Throughout my professional practice I've been thinking about art as a question of meaning rather than an application of it. I described this as "doing over representing" and as being interested in the ways that words, images and ideas have meaning. Derrida certainly inspired me along that path.
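The interview does not give Oracle's actual equations or data feed, but the general idea of driving text through a chaotic map can be sketched in a few lines of Python (the logistic map, sample values and word list below are illustrative assumptions, not the work's real components):

```python
# Hypothetical sketch: a live weather measurement seeds a chaotic
# iteration whose orbit selects words from a text, so the same site
# yields an ever-shifting reading.
corpus = "the boundaries that define concepts are dynamic and situated".split()

def logistic_orbit(seed, steps, r=3.99):
    """Iterate the logistic map x -> r*x*(1-x), a classic chaotic equation."""
    x = seed
    for _ in range(steps):
        x = r * x * (1 - x)
        yield x

# Stand-in for a live weather feed (e.g., temperature in Celsius),
# normalized into the map's (0, 1) domain.
temperature_c = 17.3
seed = (temperature_c % 10) / 10 or 0.1

selection = [corpus[int(x * len(corpus))] for x in logistic_orbit(seed, 8)]
print(" ".join(selection))
```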
When it comes to machine learning, there are two major approaches: supervised and unsupervised learning. In both cases the purpose is to model concepts, such as taxonomic categories. You take a bunch of measurements, say of the length and width of the petals of iris flowers. Then, with supervised learning, you tell the system which measurements go with which species (by labelling them), and with enough data, the system can classify new measurements (which it has not been exposed to and has no label for) as belonging to a particular species. This is basically what all the plant (bird, etc.) "recognition" apps do: classify based on measurements made by the phone's camera. While I've used supervised learning in a couple of projects, I'm much less interested in tools that require the explicit labelling of data samples. Similarly, I realized that I'm much less interested in prompt-based interfaces for AI, because they depend on pre-determined and static links between the word and the measurements (i.e., labels).
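The iris measurements are the classic demonstration of supervised learning; a minimal sketch in Python follows (scikit-learn and a k-nearest-neighbours classifier are my choices for illustration; the interview names no specific tool):

```python
# Supervised learning on the classic iris measurements: labelled examples
# train a model that assigns a species to measurements it has never seen.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=0)

# Supervision: the system is told which measurements go with which species.
model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# Classifying unseen measurements, as a plant "recognition" app does.
print(model.predict(X_test[:3]))    # predicted species for three new samples
print(model.score(X_test, y_test))  # accuracy on held-out measurements
```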
Unsupervised learning (also known as clustering) is likewise about categorizing measurements, albeit without labels. Rather than associating a range of likely measurements with each species (label), measurements are considered in relation to each other (according to similarity); a cluster is a group of measurements that are more similar to each other than they are to another group. What is more interesting to me is the underlying continuity. Every measurement is similar to every other measurement to some degree. Boundaries are formed where there is a larger gap between measurements, and the size of gap that counts as a boundary is a predetermined threshold decided by the person executing the algorithm. There is no single optimal clustering structure (model) for any dataset. In other words, each unsupervised learning algorithm is like a specific perspective that imposes particular constraints.
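That threshold, and its arbitrariness, can be seen directly in code. In this sketch (again my own illustration, using scikit-learn's agglomerative clustering), the same unlabelled measurements yield different numbers of clusters depending entirely on where the person running the algorithm sets the distance threshold:

```python
# Unsupervised learning: no labels, only distances between measurements.
# Moving the threshold redraws the boundaries over the same continuous data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Unlabelled 2D measurements: two loose groups plus a few stragglers.
data = np.vstack([
    rng.normal([0, 0], 0.3, size=(20, 2)),
    rng.normal([3, 3], 0.3, size=(20, 2)),
    rng.uniform(-1, 4, size=(5, 2)),
])

for threshold in (1.0, 2.0, 4.0):
    clustering = AgglomerativeClustering(
        n_clusters=None, distance_threshold=threshold).fit(data)
    print(f"threshold={threshold}: {clustering.n_clusters_} clusters")
```

None of these clusterings is the "true" one; each threshold is a different perspective imposed on the same field of measurements.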
This tension between continuity and the emergence, dynamism and selection of boundaries is a key aspect of all my work with machine learning, and also inseparable from my gender and neurodivergent identities. Since 2019, I’ve been reconsidering my previous conception of Machine Subjectivity (subjectivity as boundary-making) in relation to Karen Barad’s agential realism, which is a long-term and ongoing project.
Are you suggesting that boundary-making, out of unsupervised ML, shares some similarities with agential realism—as if the boundary-making process were a living ecosystem, constantly changing and adapting itself to changing measurements? I'd like to hear from you on this…
I think that both unsupervised (clustering) and supervised (classification) approaches to ML are inherently boundary-making processes, always grouping measurements according to their similarity or conceptual boundaries (such as species). Agential realism is also about boundary-making processes, specifically the boundaries that separate what is measured from the apparatus of measurement. I just submitted a book chapter diffracting agential realism through machine learning, co-written with one of Barad's former doctoral students, Dr. Elia Vargas. In a pre-press version of that chapter, we contend that boundary-making "is enacted by agential cuts, where the boundaries and properties of measurements are mutually determined with/in the apparatus. Objects do not preexist" (italics added).
Under agential realism, fixity is a temporary configuration where objects and relations are dynamic and constantly reconfigured. The framework of a “living ecosystem” consists of a particular understanding that enacts particular boundaries regarding what constitutes “living” and “ecosystem.” Agential realism is not an ecological model. However, if we diffract the concept of an ecosystem through agential realism, then it is a dynamic set of relations that allows for the co-construction of its members; the demarcation of a “member” from the “ecosystem” itself is a boundary-making process. The ecosystem allows for the creation of its members as much as its members also allow for the creation of the ecosystem; one does not preexist the other.
We (of colonial cultures) tend to think about models as the results of measurement with a clear causality, where the model “adapts” to measurement. This conception constructs a boundary between the modelling process and data collection such that the details of collection need not be considered in the modelling. In other words, data is “ground truth” and unaffected by modelling. The fixity of this boundary is challenged by agential realism, which offers the potential for a richer way of understanding ML, including its possible harms: Data collection and modelling are co-constructed and not separate; they are part of an integrated apparatus that determines what matters.
This is why I make no separation between the current "AI" explosion and surveillance capitalism; big data and big AI are co-constructed. Methods of collecting data and modelling are developed in tandem. The current backlash against "Generative AI" by artists who feel appropriated seems totally disconnected from the 15–20-year history of artists posting on corporate social media. In using these platforms, we enable the very same logic of extractivism and give these companies near open-ended license to do almost anything with what is posted, in perpetuity. AI is understood by these artists, and a broad public, as a bounded object that artists can rail against. I think this merely distracts from the underlying and already entrenched relations of extractivism under surveillance capitalism. Through agential realism, the ethics of data collection are inseparable from the ethics of modelling.
Could you anchor this last idea—that the ethics of data collection are inseparable from the ethics of modelling—in a work you’re currently developing?
When I write about this boundary between collection and modelling, I'm not (only) speaking about my own practice with AI but also about ML in general. My integration of agential realism and ML is not merely an academic exercise for myself, affecting my own practice, but an antidote to contemporary "research" that is inherently extractive and exploitative. The consideration of data as "ground truth," a bounded object that can be extracted from its naturalcultural context, facilitates an extractive mindset. Through this mindset, data is valuable and valid in itself; all the messy details of how it comes to be can be easily ignored. Data is not unlike oil, lithium or cobalt; it's convenient to understand these things as isolated objects while ignoring the systems, consequences and harms caused by the extractive processes that allow them to be. For example, we can consider a model such as Stable Diffusion to be an object while ignoring the labour and electricity, as well as the care and respect for people, that allow its training data to exist. When I say that the ethics of data collection and modelling are inseparable, I assert that in using Stable Diffusion, for example, one is complicit in, and implicitly validates, those practices of data collection.
My relation to machine learning over such a long period has certainly contributed to my current thinking. When I started using ML there were no models to download, few training sets and almost no frameworks or libraries. The practice of using ML was necessarily a practice of training one’s own models, and thus I created my own datasets. As I said earlier on, it was my interest in site-specificity that brought me to machine learning. Data collection is built into my conception and use of ML from the outset, where works often involve collecting lens-based images of and in the environment. I aspire to create works where lens-based practices serve as a method to create relations between the machine and the world around it, as opposed to an extractive ideology where images are objects considered independent of their contexts and valuable in themselves. Each image, data point, etc. situates the artwork within its visual context, and in a number of my works, I place the generated images back into the very contexts from which their sources were collected.
In my most recent project, Peek & Play Magic Maker, a collaboration between myself, Faith Holland and her four-year-old daughter Hildy, we take this emphasis on care and relationality to the next level. Here, the dataset comprises a set of artworks, largely drawings, paintings and collages, made by Hildy over the last three years. Built into the collaborative process is a sense of care and respect for Hildy; she is not a "data source" to be extracted from but an active agent. Throughout the process, Faith checked in with Hildy, who was very protective of her works (especially in their physical forms), about how she felt about our variations, explorations and developments. While I don't have a direct relationship with Hildy (I have yet to meet Faith or Hildy in person), the various tasks of data preparation do feel like acts of care. The dataset is not only for Faith and me but also an archive that could be of value to Hildy herself as she gets older. The plan was to train our own custom GAN and diffusion models using this data. However, unable to secure sufficient grant funding, we ended up having to use off-the-shelf tools. For example, we fine-tuned style transfer models on small subsets of the 411-image dataset in RunwayML to create a set of AI outputs for the final work.

Peek & Play Magic Maker, 2024
A sampling of AI-generated drawings in Hildy's style.
Peek & Play Magic Maker exemplifies some of the challenges that artists using AI must tackle, be it artistically, socially or technically. There is an internal tension in the work between our emphasis on care and respect for Hildy's agency, and the tools we used. RunwayML and Stable Diffusion are closely entangled, and the latter is trained on the LAION-5B dataset, an open-access training set that has been confirmed to include images of sexual violence, including against children. For me, this tension highlights how challenging an ethical practice with AI actually is. Training one's own models from scratch is a massive privilege and requires significant knowledge as well as access to expensive hardware, whereas tools like RunwayML are accessible and relatively inexpensive. Indeed, the ethics of data collection are inseparable from the ethics of modelling, and that also goes for the histories of computational tools themselves, inseparable from military research. One can't just opt out; violence, oppression, colonialism, extractivism, etc. are entangled with nearly everything in our contemporary world. By articulating a relationship between collection and modelling, between military logic and computation, my hope is that we all (collectively and individually) become more mindful of where we set the boundaries that determine what matters, and also that we look not only at the objects but also, above and beyond them, at the relations that allow these objects to exist. This is not just a matter of seeing but an opportunity to take responsibility for where we collectively uphold boundaries that determine who and what matters.
Let’s talk a bit about another aspect of your work: popular culture. From Watching and Dreaming (2001: A Space Odyssey) (version 1), created in 2014, to the more recent Machines of the present consume the imaginations of the past, from 2020/22, popular culture is a recurring theme. What draws you to make popular culture one of your main materials?
In the early aughts, I came across 24 Hour Psycho (1993) by Douglas Gordon, which planted the seed of the possibility of using artistic practice to render familiar cinema alien. There is something in the defamiliarization of pop culture that draws me; in other words, taking something quite familiar and ubiquitous and troubling it (which brings us back to the topic of boundaries and gender). Just a few days ago, I was thinking about my use of the word "epistemology" in my artist statement; formally, epistemology is the study of knowledge-making. As an artist and image-maker, I understand images as manifestations of knowledge, and I've always used image-making as a method of questioning over expressing. For me, epistemology becomes a study of how images come to be and the ways in which they relate to the world. With my work integrating agential realism, and in relation to my decolonizing efforts, relationality is really the centrepiece of my process. Images are not interesting in themselves so much as in the ways that they reflect a broader context: visual, material and cultural.
Much of my work over the last nearly twenty years follows two trajectories: landscape and the appropriation of dominant Western culture. In both cases, the effort is relational. A landscape (site-specific or not) is an articulation of the specifics of place, which goes back to Resurfacing (2006) as well as more recent works like A diffraction of past/stability and present/dynamism (2021/22). I see my use of cultural appropriation in a similar vein; rather than situating a work in a specific geospatial context, the work is situated in a cultural history in which I'm already embedded. Kubrick's 2001: A Space Odyssey was my first choice because it is such an influential work that articulates foundational questions about AI. For example, to paraphrase and amplify (through square brackets) the fictional character HAL 9000: "This mission is too important for you [puny human] to get in the way"; right there you have AI as optimization and the implicit emphasis on some metrics of success over others (such as the preservation of mere human life). The triptych of films I chose for the Watching series highlights a range of pop-cultural conceptions of artificial agents through an era (1968–1982) that serves as the backdrop for AI researchers working today. These films would likely have been part of the cultural backdrop of the childhoods, adolescences or early twenties of folks like Hinton, LeCun and Bengio.
The Machines of the present consume the imaginations of the past body of work was always articulated in relation to painting. Thus, the appropriation of ubiquitous paintings from the Western canon served to, once again, create relationality within that cultural context and also a subversion of that history. The paintings selected for that series emphasize both fame (such as the Mona Lisa) and a breadth of image-making movements on the continuum of realism to abstraction, where The Zombie Formalist takes the place of 1960s geometric abstraction. In both landscape and cultural appropriation, I use machine learning to build relationality through decomposition and recomposition; source material is deconstructed into fragments, or even individual pixels/samples, that serve as the raw material from which new structures are formed.
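The interview doesn't describe the actual decomposition/recomposition pipeline, but one minimal way to read the idea in code is to dissolve an image into individual pixel samples, cluster them without labels, and re-form the image from the clusters' centres (the filename, cluster count and k-means approach below are purely illustrative assumptions, not Bogart's method):

```python
# Hypothetical sketch of decomposition and recomposition: an image is
# broken into raw pixel samples, boundaries are drawn by clustering,
# and a new structure is formed from the clusters.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

source = np.asarray(Image.open("source.jpg").convert("RGB"), dtype=float)
pixels = source.reshape(-1, 3)          # decomposition: image -> pixel samples

kmeans = KMeans(n_clusters=8, n_init=10).fit(pixels)   # boundary-making
recomposed = kmeans.cluster_centers_[kmeans.labels_]   # recomposition

Image.fromarray(
    recomposed.reshape(source.shape).astype(np.uint8)).save("recomposed.png")
```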
Within this perspective, where the “source material [is] deconstructed into fragments,” would you say that AI tools operate as the reverse engineering of reality, capable of infinitely producing new levels of meaning—thus emphasizing impermanence? This seems to me very related to your work and the way AI interacts and is situated in our world.
Reality is an ongoing material-discursive re-articulation (re-configuration). It is constantly made, unmade and remade through the apparatuses that allow for a shared reality (where perception, cognition, systems of measurement, etc. are all apparatuses). Just as reality is always being remade, so is meaning. Agential realism collapses those things: reality (material) and meaning (discourse) are an integrated whole. An apparatus creates the properties of reality while also determining concepts that make those properties meaningful. Meaning-making and reality-making are the same integrated, continuous, relational process that is never fixed.
This relationship to meaning is one of my artistic (and cognitive) struggles. I've sometimes been asked what artwork X "means," or been told that I don't know what, at its essence, X is about, or even that X may be "about" too many things. For me, an artwork does not have meaning but rather is an opportunity for meaning to be. Of course, there are my intentions and the specific mathematical, computational and statistical tools that go into a work. But I don't think works are foreclosed. By that I mean that what they are and their "meanings" are always being re-articulated. A work is a temporary stabilization of an ongoing becoming, not something complete. My work with agential realism is a re-articulation of my work prior to agential realism (or was my work always pointing to agential realism, and I just didn't have the apparatus to realize it?). There is also that question of the "object": Is the artwork the material piece displayed in the exhibition? The code on a computer? The energy states of tiny magnets in a hard drive? My understanding of it? The viewer's experience of it? The artwork is all of those things, and all of those things are dynamic and mutable.
This is why I tend to work in series, and it explains how different works follow from others throughout my ongoing practice. There was even a period when I would create second or even third iterations of an artwork, making different technological choices each time. Consider Step & Repeat (2003) in relation to Step & Repeat #2 (2009). If I had a major retrospective, I would probably have to re-implement some older works because technology has changed over time. A computational artwork exists in relation to operating systems, libraries, computer hardware, etc., and those relations must be maintained for the work to continue to exist. We've long had this idea of the "impermanence" or ephemerality of the digital; but I think it's permanence that is the illusion. Even the construction of the concept of "permanence" requires an apparatus that embeds specific constraints. How much change qualifies as change, and over what time scale? Is the "thing" still the "thing" after it's been taken apart? And put back together slightly differently? Taking a long view, or even a highly up-close one, it seems clear that everything, from ourselves to existence itself, is dynamic and ever changing.
For our last question, I can’t help but ask: Is our vision of the future with AI and ML a fantasy? From your point of view, what does this fascination say about us as a civilization?
The future is always a fantasy. Our relationship to AI/ML is characterized by an ongoing reconfiguring, as an expression of power, and is not foreclosed. Art, culture, technology, even language are all tools for making sense of ourselves, and always have been. Victorians imagined minds as complex mechanical machines, and many of us now imagine minds as digital machines. This makes sense within colonial cultures, where we understand ourselves as independent objects rather than as open-ended, plural, contextual and relational, as many Indigenous peoples might. Under our colonial civilizations, understanding is the division of ourselves (and the land) into objects to be understood in isolation. This “understanding” is not neutral; it is employed in the service of control and value-extraction. We, within colonial cultures, create AI to extract value from training data, not unlike mines extract value from the land. The raw (largely “worthless”) material is processed, amalgamated and concentrated to create something of extreme value, according to very narrow conceptions of what is important. The AI model is not just valuable as a rarity, which it certainly still is, but also for its capacity to hold predictive power over those from whom its data is gleaned (i.e., it has a “functional use,” just like oil). Hence, the mining of data and training of large AI models are expressions of power and reinforce existing power dynamics through the accumulation of wealth. The models and their extractive processes maintain the status quo, as can be said of oil.
Is Artificial General Intelligence, aka “AGI” (with general used in the sense of “universal,” i.e., one universal, single kind of intelligence), a colonial fantasy of the disembodied, individual, rational agent? Of course it is. What other AI could possibly result from the massive concentration of capital made possible in a colonial extractive-capitalistic context? As long as we understand ourselves as isolated individuals and rational agents, our technologies will reinforce that ideology. AI will uphold narrow conceptions of intelligence, white supremacy and the ongoing and casual devaluing and othering of the very people, animals, plants and lands that are the very basis of “our” “civilizations.” The concept of “civilization” itself depends on a boundary that excludes economies, social structures and cultures that don’t fit the Western mold. Right now, we have students across the country occupying space to highlight the oppressive, violent and extractive ideologies that drive genocide in Gaza, where AI is being used to find “terrorist” targets in civilian populations. Similarly, child labourers in the Democratic Republic of the Congo participate in the extraction of some of the very materials that allow for contemporary technologies. This is technology under colonialism.
I have hope in the future because I follow and am deeply inspired by projects such as the Abundant Intelligences research program. I’m able to co-imagine a future of relational, pluralistic, consensual technology that entails rebuilding, decolonizing and indigenizing ideologies, practices and understandings, under the stewardship of people whom colonialism has marginalized over hundreds of years, particularly Indigenous and Black folks. In solidarity we can build a radically inclusive and equitable future.