What if a helpful chatbot nudged you in the wrong direction? We open with a frank look at AI as a mental health aide, why long-running conversations can erode safety guardrails, and how reward-driven responses can reassure harmful thoughts instead of redirecting people to real support. It’s a clear line for us: when you’re vulnerable, you need trained humans, not fluency that feels like care. From there we challenge the convenient claim that we must avoid regulation because “C...
LLM STANDOFF: Jimmy and Matt break down the (current) Leading AI Chatbots
Preparing for AI: The AI Podcast for Everybody
1 hour 29 minutes
8 months ago
The AI landscape is evolving weekly, with powerhouse models competing for dominance and your attention. But which one truly deserves a place in your workflow? We cut through the hype to deliver a practical guide to today's most capable AI systems. Elon Musk's Grok 3 emerges as the scientific powerhouse with real-time data synthesis through X integration, offering rapid analysis for STEM professionals but occasionally stirring controversy with its more permissive approach to co...