
Originally published: 29 Oct 2025
Guest: Dr. Andrés Domínguez Hernández | Ethics Fellow, The Alan Turing Institute | Visiting Senior Lecturer, Queen Mary University of London
Dr. Andrés Domínguez Hernández is an Ethics Fellow at The Alan Turing Institute and Visiting Senior Lecturer at Queen Mary University of London's Digital Environment Research Institute. With a PhD in Science and Technology Studies and a background in engineering and innovation policy, he examines power, justice, and ethics in AI and data-driven innovation. Previously a Senior Research Associate at the University of Bristol and Director of Technology Transfer at Ecuador's Ministry of Science, Technology, and Innovation, Andrés brings Global South perspectives to questions of responsible innovation. He contributed to the Council of Europe's HUDERIA methodology for human rights impact assessment and recently presented on systemic AI governance challenges at UNESCO's Global Forum on Ethics of AI in Bangkok.
Topic: Systemic Power and Techno-Colonialism in Global AI
In this episode, we explore:
Systemic versus downstream concerns: Why current governance focuses on safety and bias at deployment while ignoring upstream issues like infrastructure control, supply chain exploitation, and industry concentration
Power concentration in practice: Infrastructure control as governance, corporate encroachment into public systems (Palantir and the NHS), and why countries whose GDP is smaller than a major tech company's market value struggle to regulate it effectively
Global South as testing ground: How risky AI applications are deployed where regulation is weakest, from Worldcoin's biometric data collection to educational technology harvesting children's data
Epistemic dominance: Foundation models embedding Western epistemologies globally, creating homogenization where similar prompts yield similar outputs regardless of cultural context
Hype as material force: Self-fulfilling prophecies that attract investment through claims about AGI, steering resource allocation and governance priorities toward existential risks over present-day harms
Human rights framework: The Council of Europe's HUDERIA methodology for assessing AI across the technology lifecycle, from design through deployment, including mechanisms for redress
Counter-power and world-making: Examples from the Global South, including Masakhane's grassroots NLP work for African languages and Lelapa AI's small language models, and the importance of moving beyond critique to imagine alternative futures
"When we critique technology, it's not the technology itself that we are critiquing, but the way it is organized and the way it is extracting value to favour a handful of companies around the world."
Episode length: 1 hour 30 minutes
Connect with Kamini:
https://www.linkedin.com/in/kamini-govender