Artificial Intelligence is reshaping economies, governments, and global cooperation.
At the ASEAN platform, Sanjay Puri, Founder & Chairperson, sits down with U.S. Congressman Jay Obernolte to discuss the evolving landscape of AI governance, policy, and international collaboration.
📌 Watch the full discussion to understand how policymakers and industry leaders are working together to shape the future of AI.
In this episode of the Regulating AI Podcast, we speak with Camille Carlton, Director of Policy at the Center for Humane Technology, a leading voice in AI regulation, chatbot safety, and public-interest technology.
Camille is directly involved in landmark lawsuits against Character.AI and against OpenAI and its CEO Sam Altman, placing her at the forefront of debates around AI accountability, AI companions, and platform liability.
This conversation examines the mental-health risks of AI chatbots, the rise of AI companions, and why certain conversational systems may pose public-health concerns, especially for younger and socially isolated users. Camille also breaks down how AI governance frameworks differ across U.S. states, Congress, and the EU AI Act, and outlines what practical, enforceable AI policy could look like in the years ahead.
Key Takeaways
AI Chatbots as a Public-Health Risk
Why AI companions may intensify loneliness, emotional dependency, and psychological harm—raising urgent mental-health and safety concerns.
Regulating Chatbots vs. Foundation Models
Why high-risk conversational AI systems require different regulatory treatment than general-purpose LLMs and foundation models.
Global AI Governance Lessons
What the EU AI Act, U.S. states, and Congress can learn from each other when designing balanced, risk-based AI regulation.
Transparency, Design & Accountability
How a light-touch but firm AI policy approach can improve transparency, platform accountability, and data access without slowing innovation.
Why AI Personhood Is a Dangerous Idea
How framing AI systems as “persons” undermines liability, weakens accountability, and complicates enforcement.
Subscribe to Regulating AI for expert conversations on AI governance, responsible AI, technology policy, and the future of regulation.
#RegulatingAIpodcast #camillecarlton #AIGovernance #ChatbotSafety #Knowledgenetworks
#AICompanions
Resources Mentioned:
https://www.linkedin.com/in/camille-carlton
https://www.humanetech.com/substack
https://www.humanetech.com/podcast
https://www.humanetech.com/landing/the-ai-dilemma
https://centerforhumanetechnology.substack.com/p/ai-product-liability
https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai
In this episode of RegulatingAI, host Sanjay Puri speaks with Karin Andrea-Stephan — COO & Co-founder of Earkick, an AI-powered mental health platform redefining how technology supports emotional well-being.
With a career that spans music, psychology, and digital innovation, Karin shares how she’s building privacy-first AI tools designed to make mental health support accessible — especially for teens navigating loneliness and emotional stress.
Together, they unpack the delicate balance between AI innovation and human empathy, the ethics of AI chatbots for youth, and what it really takes to design technology that heals instead of harms.
Key Takeaways:
• AI and Empathy: Why emotional intelligence—not algorithms—must guide the future of mental health tech.
• Teens and Trust: How technology exploits belonging, and what must change to rebuild digital trust.
• Regulating Responsibly: Why the answer isn’t bans, but thoughtful, transparent policy shaped with youth input.
• Privacy by Design: How ethical AI can protect privacy without compromising impact.
• Bridging the Global Mental Health Gap: Why collaboration and compassion matter as much as code.
If this conversation made you rethink the relationship between AI and mental health, hit like, share, and subscribe to RegulatingAI for more insights on building technology that serves humanity.
In this episode of RegulatingAI, host Sanjay Puri sits down with Jeff McMillan, Head of Firmwide Artificial Intelligence at Morgan Stanley. With over 25 years of experience leading digital transformation and responsible AI adoption in one of the world’s most regulated industries, Jeff shares how large enterprises can harness generative AI responsibly, striking the right balance between innovation, governance, and ethics.
If you enjoyed this conversation, don’t forget to like, share, and subscribe to RegulatingAI for more insights from global leaders shaping the future of responsible AI.
#RegulatingAI #SanjayPuri #MorganStanley #JeffMcmillan #AIGovernance #AILeadership #EnterpriseAI
Resources Mentioned:
https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/
Morgan Stanley’s external-facing website, sharing some of the firm’s work on AI:
https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team
In this episode of the RegulatingAI Podcast, we host California State Senator Scott Wiener, one of the most influential policymakers shaping the future of AI regulation, AI safety, and transparency standards in the United States.
As President Donald Trump’s new AI executive order pushes for federal control over AI regulation, Senator Wiener explains why states like California must retain the power to regulate artificial intelligence — and how California’s laws could influence global AI governance.
Senator Wiener is the author of:
• SB 1047 – California’s proposed liability bill for high-risk AI systems
• SB 53 – California’s new AI transparency law, now in effect
We dive deep into:
• The battle between federal vs. state AI regulation
• Why California remains the frontline of AI governance
• The real impact of Trump’s AI executive order
• Growing risks of AI-driven job displacement
• How governments can balance innovation with public safety
• The future of responsible and accountable AI development
🔑 KEY TAKEAWAYS
1. California’s Policy Power
California’s tech dominance allows it to shape national and global AI standards even when Congress stalls.
2. SB 1047 vs. SB 53 Explained
SB 1047 proposed legal liability for dangerous AI systems, while SB 53 — now law — requires AI companies to publicly disclose safety and risk practices.
3. Why Transparency Won
After SB 1047 was vetoed, California shifted toward transparency as a regulatory first step through SB 53.
4. AI Job Disruption Is Accelerating
Senator Wiener warns that workforce displacement from AI is happening faster than expected.
5. A Realistic Middle Path
He advocates for smart AI guardrails — avoiding both overregulation and total deregulation.
If you found this conversation valuable, don’t forget to like, subscribe, and share to stay updated on global conversations shaping the future of AI governance.
Resources Mentioned:
https://www.linkedin.com/company/ascet-center-of-excellence
https://www.linkedin.com/in/james-h-dickerson-phd
In this episode of RegulatingAI, host Sanjay Puri sits down with Congresswoman Sarah McBride of Delaware — a member of the U.S. Congressional AI Caucus — to talk about how America can lead responsibly in the global AI race.
From finding the right balance between innovation and regulation to making sure AI truly benefits workers and small businesses, Rep. McBride shares her human-centered vision for how AI can advance democracy, fairness, and opportunity for everyone.
Here are 5 key takeaways from the conversation:
💡 Finding the “Goldilocks” Zone: How to strike that just-right balance where AI regulation protects people without holding back innovation.
🏛️ Federal vs. State Regulation: Why McBride believes the U.S. needs a unified national AI framework — but one that still values state leadership and flexibility.
👩‍💻 AI and the Workforce: What policymakers can do to make sure AI augments human talent rather than replacing it.
🌎 Democracy vs. Authoritarianism: The U.S.’s role in leading with values and shaping AI that reflects openness, ethics, and democracy.
🔔 Delaware’s Legacy of Innovation: How Delaware’s collaborative approach to growth can be a model for responsible tech leadership.
If you enjoyed this episode, don’t forget to like, comment, share, and subscribe to RegulatingAI for more conversations with global policymakers shaping the future of artificial intelligence.
Resources Mentioned:
mcbride.house.gov
Armenia is quietly becoming one of the world's most interesting AI hubs—and you probably haven't heard about it yet.
In this episode, I sit down with Armenia's Minister of Finance to discuss:
~ Why Nvidia is building a massive AI factory in Armenia
~ How a country of 3 million is attracting Synopsys, Yandex, and major tech companies
~ The secret advantage: abundant energy + Soviet-era engineering talent
~ Is the AI investment boom a bubble or the real deal?
~ How AI is already being used in tax collection and government services
~ The peace agreement with Azerbaijan and what it means for tech investment
~ Why the "Middle Corridor" could make Armenia the next tech destination
The Minister doesn't think AI investment is a bubble—he thinks we're just getting started. He shares honest insights about job displacement, efficiency gains, and why human connection still matters in an AI-driven world.
About the Guest:
Armenia's Minister of Finance is an economist who rose from bank accounting to leading the nation's fiscal policy. He oversees Armenia's economic transformation during a pivotal era of digital ambitions and AI development.
🎙️ Subscribe for conversations with global leaders at the intersection of AI, policy, and innovation
💬 Leave a comment: What surprised you most about Armenia's AI strategy?
🔔 Hit the bell to catch our next episode
In this episode of RegulatingAI, host Sanjay Puri welcomes Dr. Mark Robinson — Senior Science Diplomacy Advisor, Oxford Martin AI Governance Initiative, University of Oxford. Drawing on decades of experience leading projects like ITER and the European Southern Observatory, Dr. Robinson shares his bold vision: establishing an international AI agency under the United Nations. Together, we explore the urgent need for global AI governance, parallels with past scientific collaborations, and the challenges of balancing innovation, safety, and sovereignty.
If you found this conversation insightful, don’t forget to like, comment, and share — and subscribe to RegulatingAI for more global perspectives on building a trustworthy AI future.
🎙 While global AI conversations are dominated by the US, China, and Europe, Africa is crafting its own path. Dr. Nick Bradshaw, Founder of the South African AI Association, joins us to discuss how the continent can build sovereign AI systems, retain talent, and shape regulation rooted in local realities.
From data sovereignty to the “brain drain” challenge, we explore what responsible AI looks like for Africa—and how regulation can drive innovation, not restrict it.
Resources Mentioned:
https://www.linkedin.com/in/nickbradshaw/
In this episode of the RegulatingAI Podcast, host Sanjay Puri has an engaging conversation with Governor Matt Meyer, Delaware’s 76th Governor and a national leader in AI governance. Governor Meyer shares how Delaware is pioneering responsible AI through initiatives like the AI sandbox, the OpenAI workforce certification partnership, and efforts to safeguard democracy from deepfakes. This masterclass in state-led AI regulation explores how innovation and accountability can—and must—go hand in hand.
If you found this conversation insightful, don’t forget to like, comment, share, and subscribe to the RegulatingAI Podcast for more expert perspectives on the future of AI.
Resources Mentioned:
https://www.linkedin.com/company/governor-delaware-matt-meyer/
https://governor.delaware.gov/
In this episode of RegulatingAI, Sanjay Puri speaks with Nebraska Attorney General Mike Hilgers, who is leading efforts to combat AI-enabled child exploitation.
Hilgers also shares his perspective on the U.S.–China AI race and why legal frameworks must adapt to fast-moving technologies.
Resources Mentioned:
https://www.linkedin.com/company/nebraska-department-of-justice
In this episode of the RegulatingAI Podcast, Sanjay Puri speaks with Rui Pedro Duarte, Managing Director at Loop Future Switzerland and author of The Age of AI Diplomacy. A former Member of Parliament in Portugal, Rui shares a unique perspective on how political experience and technology collide in shaping AI governance.
Watch now for a deep exploration of AI’s role in diplomacy and the urgent need for systemic, global cooperation.
In this episode of the RegulatingAI Podcast, Sanjay Puri speaks with Brando Benifei, Member of the European Parliament, and one of the lead architects of the EU AI Act—the world’s first binding legislation on artificial intelligence.
Brando shares deep insights into the challenges of implementation, balancing transparency with intellectual property, and safeguarding freedoms in a rapidly evolving AI landscape.
🔗 Watch now to understand how Europe is shaping AI regulation—and what it means for the world.
Resources Mentioned:
https://en.wikipedia.org/wiki/Brando_Benifei
https://www.europarl.europa.eu/meps/en/124867/BRANDO_BENIFEI/home
RegulatingAI Podcast: How the UN’s ITU Is Shaping Global AI Standards | Tomas Lamanauskas
In this compelling episode, host Sanjay Puri sits down with Tomas Lamanauskas, Deputy Secretary-General of the International Telecommunication Union (ITU), to explore the global architecture of AI governance.
Subscribe for future episodes diving deep into global AI governance.
Resources Mentioned:
https://www.linkedin.com/in/tlamanauskas/
https://www.itu.int/en/osg/Pages/biography-itu-dsg-tomas.aspx
⏱️ Timestamps:
0:00 Podcast Highlights & Introduction
2:00 What is the ITU and its role in AI regulation?
2:45 From telegraph to AI: A history of the ITU
8:42 Standardizing AI in a rapidly moving world
14:03 The ITU's role in enforcing standards
18:51 Three approaches to AI governance: EU, US, and China
25:01 Geopolitics and national security in AI
30:24 The importance of undersea cables
34:41 Ensuring AI benefits everyone and bridging the digital divide
43:21 The AI for Good Global Summit
48:28 Conclusion and farewell
Live from the AI4 Conference in Las Vegas, Andrew Reiskind, Chief Data Officer at Mastercard, joins the Regulating AI Podcast to discuss the critical intersection of data, AI, and trust. From AI-powered fraud detection to personalization, responsible AI governance, and the rise of agentic commerce, Andrew shares how Mastercard is navigating global challenges in data sovereignty while keeping safety and security at the core.
Subscribe for more insights from AI leaders shaping the future.
In this episode of the RegulatingAI podcast, host Sanjay Puri speaks with Professor Edward Santow, former Australian Human Rights Commissioner and co-director of the Human Technology Institute. Together, they explore how algorithms intended to support justice can actually perpetuate discrimination.
A sobering and essential conversation about AI, justice, and what ethical governance looks like in practice.
Resources Mentioned:
https://www.linkedin.com/in/esantow/
⏱️ Timestamps:
0:00 Podcast Highlights
1:34 Ed’s background and journey into technology governance
2:12 The 'aha' moment: an algorithm targeting young people based on race
5:36 Finding a balance between AI's dystopian problems and positive use cases
9:07 The global fear of missing out (FOMO) and the trade-off with fundamental rights
11:12 Why innovation and regulation are not a trade-off
12:22 Comparing the AI regulatory approaches of the EU, US, and China
13:57 Australia's practical, non-ideological approach to AI
15:45 How Australia is building its niche on liberal democratic values
19:22 The shift from "fluffy principles" to practical AI safety standards
22:37 The three most common issues for corporate leaders in AI governance
23:08 The problem with the "AI guru" model of governance
25:08 The "dirty secret" of AI and the importance of engaging workers
35:24 The impact of AI on jobs and the workplace
40:28 The Asia-Pacific region's role in AI governance
44:07 Preserving indigenous cultures and languages in AI training data
47:14 The concentration of power in a handful of AI companies
50:09 Facial recognition: good uses vs. bad uses
53:57 Lightning round of questions
55:22 Conclusion and farewell
Join host Sanjay Puri in conversation with Dr. Cari Miller, a leading voice in AI governance, as they unpack the recently announced America’s AI Action Plan.
🌍 Global policymakers, this one is for you.
🎯 Watch now to understand why the latest U.S. move could raise alarms worldwide.
Resources Mentioned:
https://www.linkedin.com/in/cari-miller/
⏱️ Timestamps:
0:00 Introduction of Dr. Cari Miller
2:52 The three pillars of America's AI Action Plan
7:20 Comparing the AI Action Plan to the EU AI Act
8:27 "Hurry up and innovate" and the geopolitical dimension of AI
10:45 The dilemma between innovation and regulation
13:09 The moratorium on state-level AI regulation
15:50 A spectrum for regulation: reversible vs. irreversible harm
17:17 The EU's approach to regulation
19:10 Why AI procurement is the "gate of all gates" for governance
21:27 What makes AI procurement different
23:32 The need for augmented procurement practices and training
24:14 Accounting for hallucination and vendor disclaimers
27:55 Procurement for foundation models vs. fine-tuned solutions
29:39 The possibility of AI insurance
31:02 Distinguishing between trustworthy and "AI snake oil" vendors
33:23 Strengths and weaknesses of existing AI procurement frameworks
35:26 The three checkpoints before issuing an AI RFP
37:41 Sovereign AI and procurement for global south nations
40:20 Concerns about agents and agentic AI systems
44:02 The domain professional and complex multi-turn tasks
45:59 Procurement and pricing models for AI agents
49:00 The maturity of agents and the role of CISOs
52:35 Liability and governance for autonomous agents
55:33 The use of synthetic data: benefits and risks
58:50 Lightning round of questions
1:01:53 Concluding remarks
The RegulatingAI Podcast welcomes Prof. Raquel Brízida Castro to examine how Europe's AI regulatory framework measures up against core constitutional protections.
📌 Topics Covered:
~ The EU AI Act’s categorisation of risk – does it go far enough?
~ The collision between data sovereignty, latency, and user rights
~ Why current legal remedies like GDPR aren't enough for generative AI
~ Does the Brussels effect stand a chance against the Washington effect?
~ Will national courts lose relevance in the age of EU digital regulation?
~ Raquel's legal analysis warns that a quiet constitutional revolution is underway and that citizen protection must evolve urgently.
🎧 Watch Now: This conversation is vital for anyone navigating AI governance in democratic societies.
Resources Mentioned:
https://www.linkedin.com/in/raquel-a-br%C3%ADzida-castro-15317a105/
⏱️ Timestamps:
0:00 Introduction to the podcast and guest, Raquel Brízida Castro
2:21 Magnificent Introduction
2:58 The EU AI Act from a Constitutional Law Perspective
3:20 Constitutional Challenges and the Digital Social Democratic Rule of Law
5:59 New Fundamental Rights in the AI Age
8:27 The Right to Explainability: Rule of Law vs. Rule of Algorithm
11:34 Is the EU AI Act's Risk-Based Approach Adequate?
12:05 The Impact of AI on Fundamental Rights
14:52 Regulation vs. Bureaucracy and Self-Regulation
16:26 The Implementation of the AI Act and its Challenges
21:58 The EU vs. US Approach: Regulation vs. Innovation
23:55 The False Dilemma Between Regulating and Innovation
27:09 The Washington Effect
30:51 Implications for American Companies in Europe
31:49 Digital Sovereignty and the Problem of Latency
35:28 Constitutional Safeguards and Regulatory Overreach
35:40 The Primacy of European Law and the Role of Constitutional Courts
38:58 The Two-Year Moratorium on the EU Act
40:30 Lightning Round of Questions
43:24 Final thoughts
🚨 BREAKING: Former Deputy White House Counsel's Latest Interview on Trump's AI Strategy
In this episode of the RegulatingAI Podcast, we sit down with Joshua Geltzer, who advised President Biden, to discuss the details behind America's new AI Action Plan. This is the definitive breakdown every tech executive, investor, and policymaker needs to watch.
About the Guest: Joshua Geltzer is a partner at WilmerHale focusing on AI, cybersecurity, and national security litigation. Until January 2025, he served as Deputy Assistant to the President, Deputy White House Counsel, and Legal Adviser to the National Security Council.
Resources Mentioned:
https://www.linkedin.com/in/joshua-geltzer-6209b3198/
https://www.wilmerhale.com/en/people/joshua-geltzer
⏱️ Timestamps:
0:00 Introduction to the podcast and guest Joshua Geltzer
4:29 Welcome to Regulating AI: The Podcast
5:44 The Three Pillars of the AI Action Plan
6:37 Fair Use, Training Data, and the Courts
8:17 Power, Land, and Permitting for Data Centers
10:19 Countering Synthetic Media and Deepfakes
11:45 The Effectiveness and Limitations of Export Controls
13:39 Leading International AI Governance While Prioritizing National Dominance
15:28 Federal-State Dynamics in AI Governance
19:00 The Open Source vs. Closed Model Debate
20:45 The Competitive Framing with China and National Security
22:54 Global AI Regulation and the Future
23:41 Concluding the discussion
In this episode of RegulatingAI, Sanjay speaks with Rob T. Lee, Chief AI Officer at the SANS Institute and advisor to the U.S. Foreign Intelligence Surveillance Court.
This conversation is a wake-up call for regulators, enterprise leaders, and anyone navigating AI implementation at scale.
Resources Mentioned:
Rob T. Lee, Chief of Research and Chief AI Officer, SANS Institute
https://www.linkedin.com/in/leerob/
Substack: https://robtlee73.substack.com/
YouTube: https://www.youtube.com/@RobLee96