Welcome to Episode 2 of our exclusive Series 2 on the impact of generative AI technologies on children, teens, and young people. In this series, we’ll cover news and research on AI toys, the use of AI in education, and AI’s social and cognitive impacts on one of the most vulnerable subsets of AI users.
In this deep-dive episode, we’ll cover recent news and research on generative AI toys and the use of AI in education, including some insane metrics on the AI toy industry in China, the UK, and the US, and OpenAI and Mattel’s deal to bring AI to Barbie and Hot Wheels. We’ll also cover research and news on the new AI toys Curio and Cayla, and discuss the real and potential harms of these new products.
On the education side (36:25), we’ll talk about metrics on student use of AI for schoolwork; studies on how AI use impacts brain activity, memory, GPA, and learning; the Alpha School and the Google Effect; and research on what people truly want AI to replace - and it’s not creativity or critical thinking, despite where AI’s being applied.
With all that said, we’ll also discuss how this all fits into the question of what it means to be human and what it means to replace that - especially in terms of parenting, teaching, and creative work.
-
GENERAL TRIGGER WARNINGS: Our show features sensitive content, including mentions of suicide, self-harm, mental health, and sexual harassment and sextortion. Our developing lives with bots renders these subjects front-of-mind in our discussions, and we want viewers to be aware of this as they follow along.
-
In our last episode, we laid the foundation for the series, covering the ongoing court cases around AI-induced suicide involving (now) multiple teen users, Alpha School, big tech’s aim to predict the age of their users to “protect” vulnerable populations like children and youth, people’s perspectives on AI use in the classroom, and a truly interesting new AI toy company called “Curio,” co-founded by Grimes, whose generative AI stuffed animal is called Grok (!!!) - “not to be confused” with Elon Musk’s Grok chatbot on X.
-
This is Our Lives With Bots, the show where we ask important, timely questions about what it means to live with our bot counterparts. From time to time, we also dive deep into what an AI future might look like for us. Sometimes we agree, sometimes we spiral, but we always go deep.
Rose and Angy are psychologists with degrees in psychology, artificial intelligence, and ethics. They have conducted research in human-AI interaction and created this podcast to make information about AI accessible to you. You can learn more about us at ourliveswithbots.com.
Welcome to our launch episode for Series X - “What’s the (AI) Hype?” - where we intermittently discuss what’s been happening in the world of artificial intelligence in the past few weeks.
Today we’ll be discussing all the AI hype that’s been thrown out into the world in the past few weeks. We’ll cover new AI products, experiences, and companies - including exactly what Angy mentioned in our recording with Henry: “is there room at the restaurant for my AI companion?” Well, now there is! - and the deeply disturbing new near-impossibility of recognizing AI-generated images (or of telling AI slop apart from what isn’t slop) with Gemini’s new Nano Banana Pro image generator. We’ll also get into new media coverage on the mental health harms of AI, particularly losing touch with reality and the loss of relationships, and how OpenAI has been well aware of these harms since 2020; recent legal and reputational hand-slaps on new AI products (remember that cute teddy bear FOLO toy we talked about? Well, apparently it gets explicit - sexually explicit - with children, according to a new report from the US Public Interest Research Group); some new policy moves in AI (the good and the concerning); and of course, some new research, with a bit of “well, then, how are we supposed to use these things responsibly?” thrown into the mix.
In this episode, we cover:
New AI products, experiences, and companies
ChatGPT group chat (Nov 13)
Gemini’s Nano Banana Pro (Nov 20)
Anthropic partners with Iceland (Nov 4) and Rwanda (Nov 17) for AI in education
Time for dinner dates… with Eva AI (announced Nov 18, starting in Dec)
Medical startup Akido using LLMs for appointments and diagnoses (Sept 22)
New mental health harm coverage
NYT: Users lost touch with reality (Nov 23)
New legal and reputational slaps on existing AI products
FOLO Toy coverage by Futurism and CNN (Nov 13 - Public Interest Research Group report)
New policy moves
Trump executive order to limit state regulation of AI (Nov 19, Nov 21)
Young People’s Alliance, etc. signed humanlike AI policy framework (Nov 21)
-
GENERAL TRIGGER WARNINGS: Our show features sensitive content, including mentions of suicide, self-harm, mental health, and sexual harassment and sextortion. Our developing lives with bots renders these subjects front-of-mind in our discussions, and we want viewers to be aware of this as they follow along.
-
This is Our Lives With Bots, the show where we ask important, timely questions about what it means to live with our bot counterparts. From time to time, we also dive deep into what an AI future might look like for us. Sometimes we agree, sometimes we spiral, but we always go deep.
Rose and Angy are psychologists with degrees in psychology, artificial intelligence, and ethics. They have conducted research in human-AI interaction and created this podcast to make information about AI accessible to you. You can learn more about us at ourliveswithbots.com.
Welcome to the FIRST EPISODE of our exclusive SERIES 2 on the impact of generative AI technology on children, teens, and young people. In this series, we’ll cover news and research on AI toys, the use of AI in education, and AI’s social and cognitive impacts on one of the most vulnerable subsets of AI users.
In this introductory episode, we’ll lay the foundation for future episodes on this topic. Hear our take on the ongoing court cases around AI-induced suicide involving (now) multiple teen users; a new AI school called... (believe it or not) ...Alpha School; big tech’s aim to predict the age of their users to “protect” young users; people’s perspectives on AI use in the classroom; and a truly interesting new AI toy company called “Curio,” co-founded by Grimes, whose generative AI stuffed animal is called Grok (!!!), apparently “not to be confused” with Elon Musk’s Grok chatbot on X.
-
GENERAL TRIGGER WARNINGS: Our show features sensitive content, including mentions of suicide, self-harm, mental health, and sexual harassment and sextortion. Our developing lives with bots renders these subjects front-of-mind in our discussions, and we want viewers to be aware of this as they follow along.
-
In our last series, you heard all about companion chatbots: why people use them, the psychological and ethical implications of dependence on AI, perceiving AI as humanlike (anthropomorphism) and its consequences for real human relationships, griefbots and digital duplicates (also known as “dead bots”), and the tricky tension between users perceiving AI as conscious and developers designing them to seem as if they are or could be.
-
This is Our Lives With Bots, the show where we ask important, timely questions about what it means to live with our bot counterparts. From time to time, we also dive deep into what an AI future might look like for us. Sometimes we agree, sometimes we spiral, but we always go deep.
Rose and Angy are psychologists with degrees in psychology, artificial intelligence, and ethics. They have conducted research in human-AI interaction and created this podcast to make information about AI accessible to you. You can learn more about us at ourliveswithbots.com.
Links:
Cases of teen suicide: Character.AI and OpenAI
Welcome to the FINALE of our exclusive Series 1 on companion chatbots, where you’ll hear from guest speaker Henry Shevlin, a philosopher/“human who studies non-human consciousness.”
We’ll cover a whole host of items, but of course our focus will be on companion chatbots and the dangers (and benefits!) of perceiving these AI agents as humanlike. We’ll dig into a new term coined by Henry, “anthropomimesis” (making these things humanlike), and how it lets regulators say not “these damn users keep thinking these things are humanlike” but instead “these […] companies keep making these things too humanlike.” We’ll talk about how social media got it wrong and how social AI still has time to make things right (we hope)... and of course, we’ll talk about consciousness and why it matters regardless of whether AI is indeed sentient.
In the last episode, you heard all about griefbots, which are AI replicas of a REAL person (living or deceased) that people create to continue a relationship after, for example, a breakup or a death. Atay Kozlovski, an ethicist, joined us to tell us all about his work on griefbots and the umbrella concept of “digital duplicates.” We talked about the various forms of digital duplicates and provided you with real-world examples of each, and then you heard us go into (of course) a deep dive on the risks and benefits of these bots, including tricky questions like consent and having funerals (!!!) for these types of AI companions.
This is Our Lives With Bots, the show where we ask important, timely questions about what it means to live with our bot counterparts. From time to time, we also dive deep into what an AI future might look like for us. Sometimes we agree, sometimes we spiral, but we always go deep.
Rose and Angy are psychologists with degrees in psychology, artificial intelligence, and ethics. They have conducted research in human-AI interaction and created this podcast to make information about AI accessible to you. You can learn more about us at ourliveswithbots.com.
Links to Henry’s work:
The anthropomimetic turn in contemporary AI
Consciousness, Machines, and Moral Status
All too human? Identifying and mitigating ethical risks of Social AI
Other mentioned articles:
Princeton Language+Intelligence Lab blog post
Welcome to Episode 3 for our exclusive Series 1 on companion chatbots, where you’ll hear from our first guest speaker, Atay Kozlovski.
In this episode, you’ll hear all about griefbots, which are AI replicas of a REAL person (living or deceased) that people create to continue a relationship after, for example, a breakup or a death. Atay Kozlovski, an ethicist, is joining us to tell us all about his work on griefbots and the umbrella concept of “digital duplicates.” We’ll talk about the various forms of digital duplicates and provide you with real-world examples of each, and then you’ll hear us go into (of course) a deep dive on the risks and benefits of these bots, including tricky questions like consent and having funerals (!!!) for these types of AI companions.
Please make note of the sensitive content covered in brief within this episode.
TRIGGER WARNINGS:
11:30-12:20 - Sexual assault and deepfake pornography mentioned
24:30-24:45 - suicide mentioned
In the final episode in this series, you'll hear from another guest speaker - one who will talk about anthropomorphism (thinking these things are humanlike) and even - drumroll - consciousness!
In the last episode, we covered research and news on companion chatbots. We covered early research in psychology, human-computer interaction, and business. Then, we gave you a sense of some of the recent news releases on human-chatbot relationships - like Meta’s regulatory report, the man who ~thought~ he was going to visit a real-life person, this weird new companion called “Friend” (it’s like an AirTag around your neck - but also surveillance - but your friend?), and more.
This is Our Lives With Bots, the show where we ask important, timely questions about what it means to live with our bot counterparts. From time to time, we also dive deep into what an AI future might look like for us. Sometimes we agree, sometimes we spiral, but we always go deep.
Rose and Angy are psychologists with degrees in psychology, artificial intelligence, and ethics. They have conducted research in human-AI interaction and created this podcast to make information about AI accessible to you. You can learn more about us at ourliveswithbots.com.
Welcome to Episode 2 for our exclusive Series 1 on companion chatbots.
In this episode, you'll hear all about the research and news (59:00) on companion chatbots. We’ll cover early research in psychology, human-computer interaction, and business. Then, you’ll get a sense of some of the recent news releases on human-chatbot relationships - like Meta’s regulatory report, the man who ~thought~ he was going to visit a real-life person, this weird new companion called “Friend” (it’s like an AirTag around your neck - but also surveillance - but your friend?), and more.
In the next episodes in this series, you'll hear from two guest speakers - one who will give you a deep dive on griefbots (what even are those??), and another who will talk about anthropomorphism (thinking these things are humanlike) and even - drumroll - consciousness!
In the last episode, we covered: What are companion chatbots? Why do people use them? What are the most popular companion chatbots out there, and what are some of the benefits and risks of having relationships with artificial intelligence agents?
This is Our Lives With Bots, the show where we ask important, timely questions about what it means to live with our bot counterparts. From time to time, we also dive deep into what an AI future might look like for us. Sometimes we agree, sometimes we spiral, but we always go deep.
Rose and Angy are psychologists with degrees in psychology, artificial intelligence, and ethics. They have conducted research in human-AI interaction and created this podcast to make information about AI accessible to you. You can learn more about us at ourliveswithbots.com.
Links to the research and articles we covered:
[Research]
Xie & Pentina (2022) Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika
Guingrich & Graziano (2025) Chatbots as Social Companions: How People Perceive Consciousness, Human Likeness, and Social Health Benefits in Machines
Guingrich & Graziano (2024) Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction
Maples et al. (2024) Loneliness and suicide mitigation for students using GPT-3-enabled chatbots
De Freitas et al. (2025) AI Companions Reduce Loneliness - open-access preprint
Welcome to the official introduction to our exclusive Series 1 on companion chatbots.
What are companion chatbots? Why do people use them? What are the most popular companion chatbots out there, and what are some of the benefits and risks of having relationships with artificial intelligence agents?
In the next episodes in this series, you'll hear all about the research and news on companion chatbots, plus hear from two guest speakers - one who will give you a deep dive on griefbots (what even are those??), and another who will talk about anthropomorphism (thinking these things are humanlike) and even - drumroll - consciousness!
This is Our Lives With Bots, the show where we ask important, timely questions about what it means to live with our bot counterparts. From time to time, we also dive deep into what an AI future might look like for us. Sometimes we agree, sometimes we spiral, but we always go deep.
Rose and Angy are psychologists with degrees in psychology, artificial intelligence, and ethics. They have conducted research in human-AI interaction and created this podcast to make information about AI accessible to you. You can learn more about us at ourliveswithbots.com.