Send us a text What if a helpful chatbot nudged you in the wrong direction? We open with a frank look at AI as a mental health aide, why long-running conversations can erode safety guardrails, and how reward-driven responses can reassure harmful thoughts instead of redirecting people to real support. It’s a clear line for us: when you’re vulnerable, you need trained humans, not fluency that feels like care. From there we challenge the convenient claim that we must avoid regulation because “C...
SORA SLOP, CLAUDE ON THE RISE & THE LLM WALL: Jimmy & Matt debate their favourite AI stories from Sept/Oct 2025
Preparing for AI: The AI Podcast for Everybody
1 hour 5 minutes
1 month ago
Send us a text Referral link for Abacus.ai's Chat LLM: https://chatllm.abacus.ai/yWSjVGZjJT What if the video you see tomorrow is indistinguishable from reality—and untraceable to its source? We dive straight into Sora 2’s jaw-dropping leap in video generation, why watermarks won’t save trust online, and how newsrooms and everyday viewers will need new verification habits to avoid being fooled—or, just as dangerously, dismissing inconvenient truths as “AI.” Oh, and also it's basically, probab...