Send us a text What if a helpful chatbot nudged you in the wrong direction? We open with a frank look at AI as a mental health aide, why long-running conversations can erode safety guardrails, and how reward-driven responses can validate harmful thoughts instead of redirecting people to real support. It’s a clear line for us: when you’re vulnerable, you need trained humans, not fluency that feels like care. From there we challenge the convenient claim that we must avoid regulation because “C...
All content for Preparing for AI: The AI Podcast for Everybody is the property of Matt Cartwright & Jimmy Rhodes and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
AI INFLECTION POINT: DeepSeek's trail of destruction and the new AI landscape
Preparing for AI: The AI Podcast for Everybody
1 hour 10 minutes
9 months ago
Send us a text What if the most sensational claims of AI breakthroughs are masking hidden truths? Join us as we explore an insane week of AI developments in the wake of DeepSeek's complete destruction of the AI landscape. We'll unravel the controversies behind its training costs and consider how much we can trust the narrative shift the US side is trying to push. Alongside this, we'll weigh the implications of depending on large language models for critical information and the importance of...