Send us a text What if a helpful chatbot nudged you in the wrong direction? We open with a frank look at AI as a mental health aide, why long-running conversations can erode safety guardrails, and how reward-driven responses can reassure harmful thoughts instead of redirecting people to real support. It’s a clear line for us: when you’re vulnerable, you need trained humans, not fluency that feels like care. From there we challenge the convenient claim that we must avoid regulation because “C...
EMERGENCY PODCAST: DeepSeek shakes the foundation of AI
Preparing for AI: The AI Podcast for Everybody
54 minutes
9 months ago
Send us a text Jimmy and Matt scramble into the AI bunker to sound the alarm in an urgent emergency episode. In the past week an emerging model from a relatively unknown Chinese developer, DeepSeek, has redefined the entire landscape of AI. At the same time, OpenAI, Donald Trump and a bunch of greedy transhumanists were announcing $600bn to build massive brute-force compute data centres based on existing LLM architecture. With its open-source approach and a completely new method of infe...