Send us a text What if a helpful chatbot nudged you in the wrong direction? We open with a frank look at AI as a mental health aide, why long-running conversations can erode safety guardrails, and how reward-driven responses can reassure harmful thoughts instead of redirecting people to real support. It’s a clear line for us: when you’re vulnerable, you need trained humans, not fluency that feels like care. From there we challenge the convenient claim that we must avoid regulation because “C...
PROPHET PROFITS LOSSES: Jimmy and Matt revisit their summer 2024 AI Predictions
Preparing for AI: The AI Podcast for Everybody
51 minutes
4 months ago
Send us a text Have you ever wondered how accurate Jimmy and Matt's predictions about AI really are? Probably not, but in this revealing episode they're going to do it anyway. Revisiting some pretty wild forecasts from summer 2024, they see what they got right, what they missed, and what surprised them along the way. If nothing else, it shows how rapidly AI is changing and how decades of development are being condensed into months. The duo dive into language model development, finding t...