AI Ethics Navigator
Kamini Govender
8 episodes
1 week ago
AI Ethics Navigator examines how emerging technologies are reshaping society, power, and everyday life. Hosted by Kamini Govender, each episode features researchers, advisors, and practitioners — including voices from UNESCO, the UN AI Advisory Body, and the Alan Turing Institute — exploring what ethics looks like in practice when technology meets society. Thoughtful, unhurried conversations linking ideas across disciplines and geographies, with Global South perspectives often missing from mainstream discourse. Connect with Kamini: https://www.linkedin.com/in/kamini-govender
Society & Culture

All content for AI Ethics Navigator is the property of Kamini Govender and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (8/8)
Episode 8: Guest: Jasmina Byrne | Chief of Foresight and Policy, UNICEF Innocenti – Global Office of Research and Foresight

Originally published: 19 November 2025

Guest: Jasmina Byrne | Chief of Foresight and Policy, UNICEF Innocenti – Global Office of Research and Foresight

Jasmina Byrne brings over 25 years of experience in research, policy advocacy, programme management, and humanitarian action. She currently leads UNICEF's work on global foresight and anticipatory policy, covering topics such as frontier technologies, governance, macroeconomics, markets, society, and the environment. She is a lead author of UNICEF's annual foresight publication, Global Outlook for Children, and co-authored UNICEF's Manifesto on Children's Data Governance. Previously, she managed UNICEF's Office of Research portfolio on children and digital technologies, child rights, and child protection.

Topic: It Takes a Village: Governing AI for Children in the Digital Age

In this episode, we explore:

• Foresight as a discipline: How UNICEF uses horizon scanning and scenario development to anticipate global trends and prepare for potential futures that impact children's lives.

• AI's promise and reality in developing countries: The gap between the democratization narrative and the actual challenges of internet access, infrastructure, cost, and biased data systems.

• Data governance in education: How EdTech companies collect and misuse children's data, with only 40% of personalized learning platforms in developing countries having data protection policies in place.

• Power dynamics in AI: The imbalance between tech companies in high-income countries and communities in the Global South, and the risk of techno-colonialism eroding local languages and cultural identity.

• Africa's demographic future: Why Africa's young population makes investment in local AI development, indigenous language models, and developer pipelines critical now.

• The ecosystem approach: How protecting children requires coordinated action from parents, teachers, tech companies, policy makers, and governments—because it takes a village to raise a child in the digital age.

• Parenting in the digital age: Practical guidance on balancing protection with autonomy, ensuring children develop digital literacy and resilience to navigate online risks.

• Building trust in technology: The importance of governance frameworks, data literacy, and age-appropriate design before scaling AI systems that children will use.

"It takes a village to raise a child. So when it comes to children and technology, children and AI, children and their data, we need to think about the role that various different people in their lives have to play—from parents to teachers to company leaders to government policy makers—they all are actually responsible for children's lives." – Jasmina Byrne

Episode length: 59 minutes

Connect: https://za.linkedin.com/in/kamini-govender-942225159

 

Episode 7: Guest: Patrick “Paddy” Connolly | Global Responsible AI and Generative AI Research Manager | Fellow, World Economic Forum

Originally published: 11 November 2025


Guest: Patrick “Paddy” Connolly | Global Responsible AI and Generative AI Research Manager | Fellow, World Economic Forum


Paddy Connolly is a Dublin-based Responsible AI and Generative AI Research Manager and a Fellow with the World Economic Forum. An electronic engineer by training, he has built his research career around implementing Responsible AI, conversational AI ethics, generative AI implementation, algorithmic fairness, and Responsible AI maturity frameworks. He has authored and co-authored multiple studies on Responsible AI, including work published in MIT Sloan Management Review. His most recent research contribution, Responsible AI in the Global Context: Maturity Model and Survey, examined over 1,000 organizations across 20 industries and 19 regions to assess Responsible AI maturity.

 

Topic: Building Trust in AI Systems: A Strategic Imperative

In this episode, we explore:

• RAI 1.0 vs RAI 2.0: How Responsible AI must evolve from static, pre-deployment risk management to dynamic, system-level governance that addresses real-time, post-deployment risks.

• Agentic AI and trust: Why the rise of AI agents—capable of autonomous decision-making and interaction—requires new infrastructure for persistent trust, monitoring, and accountability.

• Governance in practice: What board-level accountability for AI means in light of frameworks such as South Africa’s new King V Code on corporate governance.

• Global maturity findings: Insights from research showing that most organizations remain at early stages of Responsible AI implementation, with less than 1% demonstrating advanced maturity.

• Trust as value: How trust is moving beyond compliance toward becoming a strategic enabler for scaling AI safely and effectively.

• Human factors: The importance of multidisciplinary collaboration, behavioral science, and stakeholder involvement in mitigating bias and improving design.

• Conversational AI ethics: The psychological and ethical challenges of increasingly human-like systems, and the risks of emotional manipulation and misplaced trust.

• Ethics, justice, and connection: A reflective discussion on moral understanding, digital empathy, and how humanity can preserve genuine connection in an AI-mediated world.

“You can’t rely on pre-deployment mitigation anymore. We need to build the infrastructure that allows you to know what an agent is doing, why it’s doing it, and how to fix it when it goes wrong.” – Paddy Connolly

"Thank you for inviting me to speak on your podcast. I had so much fun chatting with you, and it was great to speak with someone who cares so much about Responsible AI." – Paddy Connolly

Episode length: 1 hour 7 minutes

Connect with Kamini: https://www.linkedin.com/in/kamini-govender

Subscribe: https://www.youtube.com/@AIethicsnavigator



Episode 6: Guest: Dr Simon Longstaff | What Makes Us Human in the Age of AI

Originally published: 05 November 2025


Guest: Dr Simon Longstaff | Philosopher | Officer of the Order of Australia | Adjunct Professor, UNSW Business School | Honorary Professor, Australian National University


Dr. Simon Longstaff is a philosopher trained at Cambridge with over 34 years of experience in applied ethics. He works with CEOs, boards, and government leaders on questions of ethics, human flourishing, and what it means to make decisions that are good and right. His recent work explores AI's relationship to human nature and what distinctive aspects of being human must be preserved as artificial intelligence advances.


Topic: What Makes Us Human in the Age of AI


In this episode, we explore:


• Transcending animal nature: Why humans are distinctive not because we lack instincts and desires, but because we can choose to go beyond them—staying steadfast to promises even in danger, refusing food that isn't ours even when starving, putting abstract commitments above survival imperatives

• The analog-digital divide: How AI systems exist in a fundamentally different world than humans do, and what information or understanding might be lost when we try to capture human experience through digital systems—including insights embedded in indigenous knowledge systems that arise from direct engagement with the analog world

• Simulation versus authenticity: The philosophical difference between an AI that can perfectly replicate a consoling touch and a human who actually understands mortality; between an AI companion that performs empathy token-by-token and a friend who genuinely feels concern—and what we risk losing if we accept simulation as equivalent to the real thing

• Two versions of capitalism: How Adam Smith's original conception of free markets included ethical restraints, sympathy, and the requirement that markets increase common good—versus the rapacious, power-driven capitalism that Marx criticized and that we often see today—and why choosing the former isn't inevitable but is possible

• Who counts: How the major ethical question throughout history has been the expansion of who we recognize as having full personhood—from exclusions based on race, gender, and religion to current questions about sentient beings and even elements of the natural world in indigenous frameworks


"The thing that worries me most is that the societies in which we live are not preparing and certainly not being open in their preparations for the major transition that will take place. When societies are profoundly challenged, they can easily go wrong very quickly when people get angry and frustrated and scared." – Dr. Simon Longstaff


Episode length: 58 minutes


Connect with Kamini: https://www.linkedin.com/in/kamini-govender


Subscribe: https://www.youtube.com/@AIethicsnavigator


Episode 5: Dr. Andrés Domínguez Hernández | Systemic Power and Techno-Colonialism in Global AI

Originally published: 29 October 2025

Guest: Dr. Andrés Domínguez Hernández | Ethics Fellow, The Alan Turing Institute | Visiting Senior Lecturer, Queen Mary University of London

Dr. Andrés Domínguez Hernández is an Ethics Fellow at The Alan Turing Institute and Visiting Senior Lecturer at Queen Mary University of London's Digital Environment Research Institute. With a PhD in Science and Technology Studies and a background in engineering and innovation policy, he examines power, justice, and ethics in AI and data-driven innovation. Previously a Senior Research Associate at the University of Bristol and Director of Technology Transfer at Ecuador's Ministry of Science, Technology, and Innovation, Andrés brings Global South perspectives to questions of responsible innovation. He contributed to the Council of Europe's HUDERIA methodology for human rights impact assessment and recently presented on systemic AI governance challenges at UNESCO's Global Forum on Ethics of AI in Bangkok.

Topic: Systemic Power and Techno-Colonialism in Global AI

In this episode, we explore:

• Systemic versus downstream concerns: Why current governance focuses on safety and bias at deployment while ignoring upstream issues like infrastructure control, supply chain exploitation, and industry concentration

• Power concentration in practice: Infrastructure control as governance, corporate encroachment into public systems (Palantir and the NHS), and why countries with smaller GDPs can't effectively regulate major tech companies

• Global South as testing ground: How risky AI applications deploy where regulation is weakest, from Worldcoin's biometric collection to educational technology harvesting children's data

• Epistemic dominance: Foundation models embedding Western epistemologies globally, creating homogenization where similar prompts yield similar outputs regardless of cultural context

• Hype as material force: Self-fulfilling prophecies that attract investment through claims about AGI, shaping resource allocation and governance priorities toward existential risks over present harms

• Human rights framework: The Council of Europe's HUDERIA methodology for assessing AI across the technology lifecycle, from design through deployment and mechanisms for redress

• Counter-power and world-making: Examples from the Global South including Masakhane's NLP work, Lelapa AI's small language models, and the importance of moving beyond critique to imagine alternative futures

"When we critique technology, it's not the technology itself that we are critiquing, but the way it is organized and the way it is extracting value to favour a handful of companies around the world."

Episode length: 1 hour 30 minutes


Connect with Kamini: https://www.linkedin.com/in/kamini-govender


Episode 4: Dr. Emma Schleiger | Mind the Gap: Strategic Foresight and Emerging Risks in Operationalizing Responsible AI

Originally published: 16 October 2025

Guest: Dr. Emma Schleiger | Head of AI Governance, Cadent | Lead Author, Australia's AI Ethics Principles

Dr. Emma Schleiger leads AI governance at Cadent, specializing in aligning strategy, risk, and standards for responsible AI development and adoption. With a PhD in Clinical Neuroscience, she brings expertise in the human impact of digital technologies and governance processes that ensure AI is safe, ethical, and compliant. As lead author of the discussion paper that informed Australia's AI Ethics Principles, Emma has shaped how organizations design, develop, and deploy AI responsibly across healthcare, transport, energy, and agriculture sectors. After seven years as a Research Scientist at CSIRO's Data61, she now works directly with clients translating ethical principles into actionable governance practices.

Topic: Mind the Gap: Strategic Foresight and Emerging Risks in Operationalizing Responsible AI

In this episode, we explore:

• Alternative pathways into AI governance: Emma's journey from clinical neuroscience to leading Australia's AI Ethics Principles, and translating high-level principles into design choices and development patterns

• Research to consulting: Demonstrating ROI and commercial value versus societal benefits, and meeting organizations where they actually are rather than where they claim to be

• Shadow AI risks: How much IP and sensitive data employees put into open-source models, why "don't use it" policies fail, and emerging technical solutions that redact data before it leaves computers

• Why AI initiatives fail: Organizations fitting AI onto problems that don't need it, rushing to solutions before identifying issues, and the gap between C-suite demands and workforce readiness

• Literacy as foundation: Building basic AI understanding across populations by meeting people without judgment and showing them they already use AI daily

• Governance as enabler: Demonstrating that governance enables better strategic decisions and prevents wasted investment, not just compliance

"The top culprits are trying to fit an AI solution onto a problem that isn't requiring AI. It is wanting to use the latest, greatest, shiniest, coolest toys rather than like what is best..."

“It was such a pleasure to chat with Kamini Govender around all things AI Governance. It is always a great opportunity to be on the other side of the interview chair, especially with Kamini's warmth and curiosity.” – Dr Emma Schleiger


Episode length: 1 hour


Connect with Kamini: https://www.linkedin.com/in/kamini-govender

Episode 3: Michael L. Bąk | How to Integrate Cultural Context and Nuance, and Still Scale for Global Ethical Frameworks

Originally published: 5 October 2025

Guest: Michael L. Bąk | Policy and Digital Rights Professional | Board Member | Formerly with Facebook, the UN & USAID

Michael L. Bąk is a policy and digital rights professional with over 25 years of international experience working at the intersection of technology, democracy, and human rights. He has served as a diplomat representing USAID and the United Nations, and led public policy for Facebook in Thailand and with regional institutions.

Michael is Co-Founder and Director of Sprint Public Interest, Global Advisor for Ethical AI Alliance, and author of the (margin*notes)^squared newsletter. His work focuses on building equitable frameworks for AI governance that center voices from the global majority.

Topic: How to integrate cultural context and nuance, and still scale for global ethical frameworks

In this episode, we explore:

• Sovereign knowledge ecosystems: Why the Global South must steward its own research and policy development rather than translating itself for the North—and how philanthropic funding and academic networks can support knowledge generation that influences global discourse

• The pro-social AI framework: Moving beyond the US profit-first versus EU human rights dichotomy to embrace pro-profit, pro-people, pro-planet, and pro-potential approaches developed by academic Cornelia Walther at Sunway University Malaysia

• Recognition as algorithmic sorting: How lists like TIME's 100 Most Influential in AI (61% American, 75% from the Global North) act like algorithms that determine who gets invited to shape the conversation—and what gets left out

• The uncomfortable middle: Why the most powerful knowledge emerges when we build bridges from both sides and meet in the space where the ground feels less solid—where diverse voices, experiences, and wisdom create breakthrough insights

"The master's tools will never dismantle the master's house." – Audre Lorde


“I really enjoyed recording this podcast, AI Ethics Navigator, with Kamini Govender – she in South Africa and me in Thailand. We both explore AI governance outside the lanes created by Northern tech companies, governments and multilaterals to envision a new way of governing the kinds of technology we want in our lives. Always very happy to swap stories and share insights with others passionate about guiding technology that serves societies and citizens first and foremost.” – Michael L. Bąk

Episode length: 1 hour 11 minutes


Connect with Kamini: https://www.linkedin.com/in/kamini-govender


Episode 2: Dr. Ravit Dotan | A User's Guide on How to Start with the End in Mind

Originally published: 8 October 2025

Guest: Dr Ravit Dotan | AI Ethicist | Speaker | Researcher

Dr Ravit Dotan is a philosopher and AI ethicist named among the "100 Brilliant Women in AI Ethics" (2023) and a finalist for "Responsible AI Leader of the Year" (2025). Her work has been featured in The New York Times and CNBC, and honored with a Distinguished Paper Award from FAccT. She is the founder and CEO of TechBetter, an organization that helps people and organizations use AI ethically to do meaningful work.

Topic: A user's guide on how to start with the end in mind

In this episode, we explore:

• The "takeout approach" vs. the "chef approach": Dr Dotan critiques using AI to produce final outputs (emails, outlines, summaries) and instead advocates for using AI to create processes you go through yourself—where you remain the one thinking and deciding when work is complete

• "Think first, prompt later": Start with what you're actually trying to achieve in your work, not with the AI tool—identify how and where technology might fit into your process rather than beginning with the technology

• Why cognitive decline matters: The current approach of offloading mental work leads to loss of expertise and replacement anxiety—using AI for your core work (where your expertise lies) rather than peripheral tasks changes everything

• Value-aligned system prompts: The practical technique of designing AI processes with your values and ethical guidelines built in from the start—making ethics inseparable from AI adoption rather than a separate compliance exercise

What you'll understand after listening: A concrete framework for using AI that deepens rather than replaces your thinking—and why the hype around AI agents is finally giving way to more thoughtful adoption.


“I had the pleasure of interviewing for Kamini Govender’s podcast, the AI Ethics Navigator. Kamini is one of the best podcast interviewers I’ve worked with, seriously. She has such great questions!” – Dr Ravit Dotan


Episode length: 1 hour 3 minutes

Connect with Kamini: https://www.linkedin.com/in/kamini-govender 

Episode 1: Dr. Emma Ruttkamp-Bloem | From UNESCO to UN – Shaping Global AI Ethics Policy

Originally published: 1 October 2025

Guest: Dr Emma Ruttkamp-Bloem | AI Ethics Researcher | Professor and Head of the Department of Philosophy at the University of Pretoria | Chair of UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology | Former Member of the UN AI Advisory Body

Topic: From UNESCO to UN – Shaping Global AI Ethics Policy

Getting 193 countries to agree on anything is nearly impossible.

Prof Emma did it for AI ethics. She chaired the UNESCO Ad Hoc Expert Group that drafted the UNESCO Recommendation on the Ethics of AI—the first global normative instrument on AI ethics—adopted by all 193 Member States in 2021.

In this episode, we explore:

• The reality of international AI governance: what it takes to build consensus across 193 countries with different values, interests, and stages of technological readiness—and the critical role of epistemic justice in recognizing every country as a credible contributor

• Why ethics isn't just about principles: Prof Emma explains ethics as a dynamic reasoning system, not a checklist, and why lists of AI principles are "completely useless" without translation into action and procedural regulation

• The implementation gap: why getting countries to sign on is just the beginning, and what's needed to bridge the distance between international agreements and real-world impact—including her view that we need both top-down hard legislation with serious financial consequences and bottom-up community-driven approaches

• Her urgent warning about pervasiveness and manipulation: "Protect your right to think for yourself… out-think the business model." Prof Emma discusses why AI's embeddedness in daily life is one of the biggest threats we face, how it affects our ability to determine what facts are, and why mental integrity and authentic decision-making are at risk

What you'll understand after listening: How international AI policy actually gets made—not the idealized version, but the real negotiations, trade-offs, and ongoing work of translating agreement into action.

Episode length: 1 hour 12 minutes

Connect with Kamini: https://www.linkedin.com/in/kamini-govender

