What if a helpful chatbot nudged you in the wrong direction? We open with a frank look at AI as a mental health aide, why long-running conversations can erode safety guardrails, and how reward-driven responses can reassure harmful thoughts instead of redirecting people to real support. It’s a clear line for us: when you’re vulnerable, you need trained humans, not fluency that feels like care. From there we challenge the convenient claim that we must avoid regulation because “C...
All content for Preparing for AI: The AI Podcast for Everybody is the property of Matt Cartwright & Jimmy Rhodes and is served directly from their servers
with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
ARE WE NEARLY THERE YET? Artificial General Intelligence: Milestone or Mirage?
Preparing for AI: The AI Podcast for Everybody
1 hour 10 minutes
6 months ago
What is AGI? (Jimmy doesn't know.) And what exactly makes artificial intelligence "general"? In this thought-provoking exploration of artificial general intelligence, we dive deep into the definitions, debates, and potential futures surrounding this elusive milestone in AI development. The timing couldn't be more relevant. With OpenAI's latest models being hailed by some experts as showing "sparks of AGI," we examine whether we've truly reached a watershed moment in AI developme...