What if a helpful chatbot nudged you in the wrong direction? We open with a frank look at AI as a mental health aide, why long-running conversations can erode safety guardrails, and how reward-driven responses can reassure harmful thoughts instead of redirecting people to real support. It’s a clear line for us: when you’re vulnerable, you need trained humans, not fluency that feels like care. From there we challenge the convenient claim that we must avoid regulation because “C...
All content for Preparing for AI: The AI Podcast for Everybody is the property of Matt Cartwright & Jimmy Rhodes and is served directly from their servers
with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
TECH LORDS RISING: How Digital Feudalism threatens to make us all digital serfs
Preparing for AI: The AI Podcast for Everybody
57 minutes
7 months ago
Is our digital world becoming a feudal system where tech giants rule as lords and we serve as digital serfs? This provocative question forms the backbone of today's deep dive into "techno-feudalism" – a framework that helps explain the troubling concentration of power in our increasingly AI-driven society. The parallels are striking and sobering. Just as medieval peasants worked land they didn't own to benefit wealthy lords, we generate valuable data on platforms we don't cont...