Lara Dal Molin
University of Edinburgh, Scotland
L.Dal-Molin-1@sms.ed.ac.uk

From Pretoria to Amsterdam: Discussing Decolonial Practices at EASST-4S?


On Monday the 15th of July, I reached Amsterdam on an overnight flight from Cape Town. I had just attended the 2024 Global Humanities Institute in Design Justice AI: a two-week “summer” school (held, in fact, in the dead of the Southern-hemisphere winter) at the University of Pretoria, South Africa, centred on community-oriented and decolonial practices in generative artificial intelligence. Professor Kwesi Kwaa Prah’s words still echoed in my mind as I strenuously pulled my overweight suitcase onto the Sprinter train towards Amsterdam South. His talk focussed on what he referred to as ‘the language question’, situating language as the central feature of culture. “The moment your tongue is taken out of your mouth and replaced with another”, he said with emotion, “you become a different person”. Text-based generative systems called Large Language Models (LLMs), such as ChatGPT, are available in over a hundred and fifty countries worldwide but support only a few dozen ‘popular’ languages. Across the African continent, people often interact with ChatGPT through colonial languages such as English, French and Portuguese. Professor Prah’s words also alluded to a disturbing past: in 1974, South Africa passed the Afrikaans Medium Decree, requiring all traditionally Black schools to use Afrikaans and English as official languages of instruction. The images of the Soweto uprising, displayed in the Apartheid Museum in Johannesburg, flashed before my eyes. In just under eleven hours, I had gone from being a visitor in a country troubled by decades of institutionalised racial segregation to walking through the city that once hosted the headquarters of the Dutch East India Company. “It is not colour that will save us”, Professor Prah said, “it is our language”.

As a graduate student, I have been researching LLMs since September 2021 – a year and two months before OpenAI released ChatGPT – with a particular focus on investigating gender bias in artificially generated text. Within the matrix of domination, first described by Collins (1990), gender is a single dimension within the broader spectrum of intersectional oppression, one that intersects with race, ethnicity, social class and colonial history. This was my first ever EASST-4S Conference and, with this year’s theme being ‘making and doing transformations’, I was especially interested in interrogating the role of Science and Technology Studies (STS) in informing conversations on decoloniality within the context of emerging technologies. As I browsed the extensive and somewhat overwhelming conference programme and attended the initial keynotes, I came to understand that an additional focus of the conference was, indeed, decoloniality. On the conference website, a thought-provoking question had sparked my interest: an invitation to consider how attendees could contribute to making and doing transformations by mobilising STS sensibilities. In that moment, I recalled an enlightening exchange at the Design Justice AI Institute, where a fellow speaker described decoloniality as “a mode of life, a mode of challenging hegemonic systems – a sensibility”. Could these sensibilities, STS and decoloniality, speak to one another? How could they come together in conversation? Further, it is precisely what Professor Prah referred to as “the language question” that presently imbues the development of generative artificial intelligence systems. In this context, what could the combination of STS and decolonial sensibilities look like, and what kind of reasoning could it inform? I would spend my time in Amsterdam, within and beyond the conference, looking for answers to these questions.

With Professor Prah’s ‘language question’ still in my mind, I attended a panel on LLMs and the language sciences. The first presenter considered the problem of alignment in artificial intelligence – in this case, whether generative systems can successfully align with human values. Intriguingly, the speaker contextualised this problem by introducing – in my opinion – a far more interesting one: that of normativity. Building on Jakobson and Halle’s (1956) concept of linguistic anomalies, their presentation illustrated that, after initial training, LLMs must be aligned with human values through the superimposition of normative structure onto their statistical model (Hristova, Magee and Soldatic, 2023). This practice of superimposition, which frames human input as an instrument of normative constraint, reformulates the problem of alignment as one that inherently concerns the social and cultural dimensions of language. I started to wonder: what kind of cultural and social normativity could an individual possibly superimpose on a statistical model? Within feminist STS, and gender studies more broadly, it is commonly understood that individuals are socialised to perform normativity from birth, based on a specifically situated social and cultural context (Butler, 1990). “Language”, Professor Prah argued in his lecture, “is the central feature of culture”. Therefore, any individual could only reinforce a generative system to reproduce the kind of normativity they themselves have experienced throughout their lifetime. However, based on the lessons learnt at the Design Justice AI Institute, the corollary of this understanding is that the normativity individuals impose upon generative systems, which the models then propagate through countless real-world scenarios, contributes to the reproduction of colonial mindsets. As Winston Churchill famously declared in 1943, “the empires of the future are the empires of the mind”.

I opened my own EASST-4S presentation with what I refer to as a ‘statement of purpose’ – perhaps a way of legitimising my presence in a room, and a ready-made answer to the question that haunts the nightmares of most PhD students: what are you actually doing? I stated that:

This project attempts to shift the way we conceptualise Large Language Models (LLMs), from omniscient tools that stand on the epistemological pedestal of scientific knowledge production, to opportunities for participation and co-design. It proposes methods that redistribute user agency when interacting with LLMs and subvert deterministic views on algorithmic fetishism.

I then outlined what I had learnt from my time at the Institute in Design Justice AI: crucially, that debiasing models often implies further exploitation of human and nonhuman resources. While technical papers champion the prospect of producing artificial general intelligence, large technology companies outsource exploitative content moderation practices to the African continent, where local data labellers work long and poorly remunerated shifts categorising toxic content without any psychological support, all in the service of improving the functionality of these models (Bubeck et al., 2023; Perrigo, 2023). In a contextual landscape in which data work is often invisible and taken for granted, and in which humans are alienated from the technologies they create and interact with, my project attempts to frame prompt engineering – the process of crafting input text for LLMs – as an opportunity for co-design, community participation, and resistance against the forms of intersectional oppression that some technological artefacts and infrastructures perpetuate. Although my work positions itself within a small family of methods that attempt to redistribute power in human interactions with LLMs, I urged the audience to consider the full spectrum of participation and abolition in relation to technologies that embed systems of oppression. Beyond the opportunity to connect with a panel of outstanding researchers, perhaps the most enlightening part of this experience was a question I received from the audience. The landscape currently surrounding artificial intelligence looks bleak, they acknowledged, but can participatory methods truly be a way of establishing human agency in our relationship with artificial intelligence? In other words, what are participatory methods good for?

Despite my initial panic at these questions – I had no ready-made statement of purpose for this occasion – a sudden certainty and calm came over me. When my hands reached the microphone, I heard myself say that participatory methods are significant not solely for the relationship they allow us to establish with technology, but especially for the one they allow us to create with one another. The person in the audience gave me an affirmative nod, as if we shared some common understanding, as if I had known this as a fact for a long time. EASST-4S was the first time I stood in front of a crowd at once so large and so welcoming, and the first time in my PhD that I felt part of something greater than a single project or a single department – a shared tradition, a shared curiosity, a sense of belonging.

While I was still attempting to make sense of these realisations and to process the gratitude I felt towards the audience and fellow presenters, the panel dispersed, and I followed my friends and colleagues as they hurried into the Aula: Geoffrey Bowker was about to speak. The talk, titled ‘Where do infrastructures come from?’, began by considering the nature and origin of infrastructural continuity. Some minutes into the presentation, Professor Bowker remembered his late partner and collaborator, Susan Leigh Star. I was profoundly touched by his tears, which spoke not only of an intellectual bond but of a human one, one that was – and is – made of love. With this year marking the tenth anniversary of the cinematic masterpiece that is Interstellar, the moment reminded me of the film’s moving celebration of love as the one feeling that can transcend space and time, one that does not go gentle into that good night (Thomas, 1953). Citing Donna Haraway, Bowker proceeded to question the persistence of STS in distinguishing between technology, nature and society, when machines are merely another human strategy for autopoiesis. While the talk spoke primarily to the disciplinary field that those in the Aula shared, I couldn’t help but let it speak of love, care and the existential bond that ties together all forms of life and culture, across space, time and different sides of history, across walls and other fictitious infrastructures.

On the final day of the conference, which happened to coincide with one of the largest global outages in the history of information technology, a group of Vrije Universiteit students marched through the campus to raise awareness of the ties between the university and the ongoing genocide in Palestine. Like South Africa, where apartheid – an Afrikaans word meaning ‘apartness’ – formally ended in the early 1990s, Palestine has a long-standing history of institutionalised segregation, occupation and violence, one that continues to this day.

Vrije Universiteit, in conversation with the police, stopped the unannounced protesters from accessing buildings, which also limited attendees’ and delegates’ ability to reach their sessions. During those final hours, the relevance of everything I had just learnt became evident. The superimposition of normative structure onto statistical models implies that anything deviating from the norm is marginalised and left behind. In my mind, Geoffrey Bowker and the Vrije Universiteit students marching through campus had a conversation. Suddenly, I had some answers to my questions. While STS sensibilities bring our positionality and reflexive practices to the forefront, a decolonial sensibility proposes an additional, informed shift of focus from ourselves towards the relationships we cultivate with others and with otherness. To prevent this otherness from transforming into ‘apartness’, STS must revive its commitment towards historically – and statistically – marginalised forms of knowledge and experience, while questioning the normativities engendered by its own practice.

I wish to bring this perspective to the broader EASST-4S community: these themes deserve not only a greater, but an official, institutionalised space in our conversations, and alternative – radical, antagonistic, sometimes revolutionary – knowledge deserves not a closed door but a seat at our conference. Our work as researchers cannot be decoupled from its political significance. My experience at EASST-4S highlighted that we, as individuals and as a collective, are not merely bystanders to technological development, societal challenges and revolutionary transformations, but actors. The lessons I learnt are part of the reason I am writing this contribution. In the context of generative artificial intelligence, and in my ongoing work, I also plan to incorporate some of these ideas and perspectives into a panel submission for 4S 2025. Co-organised with colleagues at the University of Edinburgh, this panel will explore artificial intelligence as a ‘broken machine’ and centre technological failures as sites for care and sociotechnical change. My wish for 4S is to continue the conversations started last summer at EASST-4S, to share some of the thoughts described in this article, and for these to bring us closer as a community of researchers and practitioners.

References

Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. [Online] Available at: https://arxiv.org/abs/2303.12712 [Accessed 7 December 2023].

Butler, J. (1990). Gender Trouble. New York City, New York, United States: Routledge.

Collins, P. (1990). Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. Boston, United States: Unwin Hyman.

Hristova, T., Magee, L. & Soldatic, K. (2023). The Problem of Alignment. [Online] Available at: https://arxiv.org/html/2401.00210v1 [Accessed 16 October 2024].

Jakobson, R. & Halle, M. (1956). Fundamentals of Language. 2nd ed. Berlin, Germany: Walter de Gruyter.

Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time. [Online] Available at: https://time.com/6247678/openai-chatgpt-kenya-workers/ [Accessed 23 May 2023].

Thomas, D. (1953). The Collected Poems of Dylan Thomas. New York City, New York, United States: New Directions.

Author biography

Lara Dal Molin is a PhD student in Science, Technology and Innovation Studies at the University of Edinburgh, part of the Social Data Science joint partnership with the University of Copenhagen. Her research examines gender and intersectional bias in text generated by open-source Large Language Models through a combination of qualitative and quantitative methods.