Subscribe to the Just AI newsletter for links to all of the articles discussed today:
https://thepaigelord.substack.com/
🎙️ Just AI Podcast
Where justice meets artificial intelligence.
🔹 Host: Paige Lord
Stay informed on the latest in AI, ethics, and policy—because the future of AI is being shaped right now. 🚀

Episode 8: AI, Free Speech, and Global Tensions
After a short break, we’re back this week with major AI updates that are shaking up policy, global research, and security.
🔹 This week’s topics:
🔍 US State Department’s “Catch and Revoke” AI Initiative – AI-powered surveillance is scanning social media for pro-Palestinian and anti-war sentiments, potentially revoking visas. What does this mean for free speech and the future of digital rights?
🌏 China’s Strategic AI Talent Move – A leading AI researcher, Tingwen Huang, has returned to China after two decades abroad. What does this signal about China’s growing AI ambitions and its global AI strategy?
📜 Anthropic’s Policy Push – The AI company has submitted key recommendations to the U.S. government, calling for stronger semiconductor export controls and security measures. How will this influence the AI policy landscape under the Trump administration?
🍏 Apple’s AI Struggles – Delayed Siri updates, AI-generated voice-to-text failures, and setbacks in their AI rollout—does Apple have a clear AI vision, or are they falling behind?
🚨 AI & Child Exploitation Arrests – A global crackdown led to 25 arrests related to AI-generated child abuse material. With laws struggling to keep up, what can be done to close the legal gaps?
Tune in for this week’s Just AI episode—where we break down the biggest AI developments, why they matter, and what they mean for the future.
Listen now! 🎙️
This Week on Just AI – We’re making some changes! Episodes will now be quicker—25 minutes or less—focusing on the biggest AI, responsible AI, and AI policy stories of the week. We’re also shifting to an audio-only format, still available on Spotify, Apple, and YouTube.
🔗 Subscribe to the Just AI newsletter on Substack for a deeper dive into responsible AI and policy news!
🎧 If you find the podcast valuable, please like, follow, and leave a review—it helps more than you know!
Links:
Newsletter (contains links to all stories discussed):
https://thepaigelord.substack.com/p/grok-ai-surveillance-and-majorana
Cipher Talk deep dive on Majorana: https://ciphertalk.substack.com/p/microsofts-quantum-leap
Black History Month: https://www.purpose.jobs/blog/celebrating-black-history-black-leaders-in-tech
Website: https://www.justaimedia.ai
🚀 What Happens in Paris...Impacts the World 🌍
The Paris AI Action Summit just wrapped up, but its ripple effects are only beginning. In this episode, we break down the key takeaways, the headlines that stole the show (yes, we’re talking about that Elon Musk-OpenAI drama), and what this all means for global AI policy.
Plus, we explore:
🔹 Black History Month Spotlight: The incredible story of Clarence Ellis, the first Black person to earn a Ph.D. in computer science.
🔹 AI Power Moves: Meta's humanoid robot push, Musk’s Grok 3, and why the NYT is embracing AI (while a major law firm is pumping the brakes).
🔹 South Korea’s AI Strategy: Big moves in computing power and chatbot restrictions.
From billion-dollar deals to the future of responsible AI, we’re breaking it all down—fast, sharp, and straight to the point.
The AI safety debate is heating up—Google has quietly lifted its ban on AI-powered weapons, and world leaders are gathering in Paris for the AI Action Summit. What does this mean for the future of responsible AI? In this episode, we break down Google's shifting AI principles, the latest EU AI Act updates, and how DeepMind’s AI is now competing with top human mathematicians. Plus, a closer look at Big Tech’s record AI investments, the latest copyright rulings on AI-generated content, and the growing divide between AI safety and AI acceleration. Tune in for all this and more on Just AI.
Links:
Newsletter: https://thepaigelord.substack.com/p/ai-safety-tug-of-war
YouTube: https://www.youtube.com/@justaiwithpaige
OpenAI Data Residency in Europe: https://openai.com/index/introducing-data-residency-in-europe/
EU AI Act:
Article 4: https://www.euaiact.com/article/50
Recital 20: https://www.euaiact.com/recital/20
Luiza Jarovsky post: https://www.linkedin.com/posts/luizajarovsky_ai-ailiteracy-aigovernance-activity-7294322953645613056-uWPA?utm_source=share&utm_medium=member_desktop
In this episode of Just AI, we take a deep dive into one of the most disruptive AI developments of the year: DeepSeek R1. How did this Chinese AI model shake the US market, wiping roughly $593 billion off Nvidia’s market value in a single day? What makes DeepSeek’s technology so formidable, and why are governments around the world scrambling to investigate or ban it? We break down the responsible AI concerns, the global response, and why this marks a pivotal moment in AI competition.
Next, in AI Policy News, Texas is making waves with HB 1709, a bold AI regulatory bill modeled after the EU AI Act. We unpack what this could mean for AI companies, civil liberties, and Texas’ $500 billion Stargate Project. Plus, we discuss a groundbreaking Ohio court ruling that threw out facial recognition evidence in a murder trial—what does this mean for AI in law enforcement?
Finally, in Research Corner, we explore Jevons’ Paradox in AI—why making AI more efficient might actually increase its environmental harm rather than reduce it. What does this mean for sustainability in AI, and how do business incentives and market forces shape its true environmental footprint?
Tune in to stay ahead of the latest in responsible AI, AI policy, and emerging AI research.
Most links are in the newsletter, but those that aren't are listed below.
Newsletter: https://thepaigelord.substack.com/p/all-eyes-on-deepseek
YouTube: https://www.youtube.com/@justaiwithpaige
EU AI Act: https://artificialintelligenceact.eu/chapter/1/
Responsible AI Leaders: https://finance.yahoo.com/news/lloyds-banking-group-lyg-appoints-083407936.html
Vatican Policy: https://www.pymnts.com/artificial-intelligence-2/2025/ai-regulations-texas-sweeping-ai-bill-and-the-vaticans-policy/#:~:text=Even%20The%20Vatican%20Has%20an%20AI%20Policy
Illinois Supreme Court Policy: https://natlawreview.com/article/illinois-supreme-court-announces-policy-artificial-intelligence
In this episode of the Just AI podcast, Paige interviews Alejandra Parra-Orlandoni (or MAPO, as she is commonly known), who shares her unique journey through the Naval Academy, MIT, and Harvard Law and her transition into responsible tech and ethical innovation, emphasizing the importance of leadership and collaboration. She describes her experience as a VP of Ethical Innovation, a role that involved balancing innovation with risk mitigation, and her current work as COO of Pasture Labs, where she focuses on improving simulation processes in industrial R&D. Paige and MAPO also tackle common misconceptions about responsible AI and responsible innovation, particularly the belief that they are merely about compliance with laws, along with the philosophical questions surrounding technology's purpose, the challenges of navigating AI ethics and regulation, the future of AI regulation in the US, strategies for driving meaningful change in AI practices, and advice for those looking to enter the field of responsible AI.
Here's MAPO's info, if you want to get in touch with her:
LinkedIn: https://www.linkedin.com/in/callmemapo/
Substack: https://callmemapo.substack.com/
Instagram: @call.me.mapo (for dog photos)
Website: spirare.tech
In this episode of the Just AI podcast, host Paige Lord discusses the intersection of AI policy, censorship, and the Stargate AI infrastructure project. She dives into notable news updates, including the EU AI Act, copyright issues, and the implications of AI self-replication. The conversation also addresses social media interference, censorship related to reproductive rights, and the significant $500 billion Stargate project announced by President Trump, highlighting the evolving landscape of AI governance and its societal impacts. She raises concerns about the environmental impact of expanding AI data centers, especially in light of the U.S. withdrawal from the Paris Climate Agreement. The conversation shifts to the latest AI benchmark test, 'Humanity's Last Exam,' and the implications of AI policy changes under the Trump administration. Paige also explores China's advancements in AI, particularly the DeepSeek model, and concludes with a discussion on the potential of AI in education, particularly in Pakistan, and the importance of responsible AI strategies for businesses.
Links:
Just AI Newsletter (where you can find most of the stories I discuss): https://thepaigelord.substack.com/p/ai-policy-censorship-and-stargate
EU AI Act Data Template: https://public-buyers-community.ec.europa.eu/communities/procurement-ai/resources/eu-model-contractual-ai-clauses-pilot-procurements-ai
Copyright/Paul McCartney: https://www.theguardian.com/technology/2025/jan/25/paul-mccartney-says-change-in-law-over-ai-could-rip-off-artists
AI can replicate itself: https://www.livescience.com/technology/artificial-intelligence/ai-can-now-replicate-itself-a-milestone-that-has-experts-terrified
In this episode, we explore the big stories shaping the world of AI and technology this week:
🔹 MLK Day and Inauguration Day Reflections: A moment of unity inspired by Martin Luther King Jr.'s timeless words on mutuality and destiny.
🔹 World Economic Forum’s Future of Jobs Report: The WEF highlights AI's transformative impact on the workforce, with 40% of today’s skills set to become outdated by 2030.
🔹 AI Missteps and Scams: From AI-generated transcript errors at Park Aerospace to an $850,000 Brad Pitt AI scam, we discuss the risks and responsibilities in today’s AI landscape.
🔹 Responsible AI Milestones: Anthropic achieves ISO 42001 certification, setting a new standard for ethical AI practices.
🔹 The Latest in AI Policy: OpenAI’s economic blueprint for U.S. AI leadership and Trump’s second-term AI initiatives take center stage in shaping America’s future.
Join us as we break down these complex topics and their real-world implications. Don’t forget to follow, share, and subscribe to stay informed about AI’s ever-changing landscape.
Sources:
You can find source articles for most of the items discussed in the Just AI newsletter: https://thepaigelord.substack.com/
Stories not in the newsletter:
WEF Report: https://www.weforum.org/publications/the-future-of-jobs-report-2025/
Mira Murati: https://www.wired.com/story/mira-murati-startup-hire-staff/
Welcome to the inaugural episode of Just AI! In this week's update, we go over rumors of AI's involvement in the LA wildfires, Meta's wilding, Microsoft's new lawsuit taking on foreign cybercriminals, the UK's big bet on AI - and more!
For links to all of the topics we talk about, check out the Just AI Newsletter. (All links are embedded!) https://thepaigelord.substack.com/
YouTube video on Meta's AI personas: https://youtu.be/P7V20zqwoD4?si=ufK2Qe9O71mzY9eK
If you're interested in working with organizations addressing AI's impact on climate change:
Climate Change AI https://www.climatechange.ai/
Partnership on AI https://partnershiponai.org/