
Welcome to the FINALE of our exclusive Series 1 on companion chatbots, where you’ll hear from guest speaker Henry Shevlin, a philosopher/“human who studies non-human consciousness.”
We’ll cover a whole host of topics, but our focus, of course, is companion chatbots: the dangers (and benefits!) of perceiving these AI agents as humanlike, and a new term coined by Henry, “anthropomimesis” (designing these things to be humanlike). That framing lets regulators stop saying “these damn users keep thinking these things are humanlike” and instead say “these […] companies keep making these things too humanlike.” We’ll talk about how social media got it wrong and how social AI still has time to make things right (we hope)...and of course, we’ll talk about consciousness and why it matters regardless of whether AI is indeed sentient.
In the last episode, you heard all about griefbots: AI replicas of a REAL person (living or deceased) that people create to continue a relationship after, for example, a breakup or a death. Atay Kozlovski, an ethicist, joined us to share his work on griefbots and the umbrella concept of “digital duplicates.” We walked through the various forms of digital duplicates with real-world examples of each, then dove deep (of course) into the risks and benefits of these bots, including tricky questions like consent and holding funerals (!!!) for these types of AI companions.
This is Our Lives With Bots, the show where we ask important, timely questions about what it means to live with our bot counterparts. From time to time, we also dive deep into what an AI future might look like for us. Sometimes we agree, sometimes we spiral, but we always go deep.
Rose and Angy are psychologists with additional degrees in artificial intelligence and ethics. They have conducted research in human-AI interaction and created this podcast to make information about AI accessible to you. You can learn more at ourliveswithbots.com.
Links to Henry’s work:
The anthropomimetic turn in contemporary AI
Consciousness, Machines, and Moral Status
All too human? Identifying and mitigating ethical risks of Social AI
Other mentioned articles:
Princeton Language+Intelligence Lab blog post