Home
Categories
EXPLORE
True Crime
Comedy
Society & Culture
Business
Sports
TV & Film
Technology
About Us
Contact Us
Copyright
© 2024 PodJoint
00:00 / 00:00
Sign in

or

Don't have an account?
Sign up
Forgot password
https://is1-ssl.mzstatic.com/image/thumb/Podcasts113/v4/15/52/64/15526441-1826-4375-f29a-d1061afc2295/mza_14702076207132770940.jpg/600x600bb.jpg
TechFirst with John Koetsier
John Koetsier
345 episodes
5 days ago
Tech that is changing the world. Innovators who are shaping the future. Deep discussions with diverse leaders from Silicon Valley giants and scrappy global startups. Plus some short monologues based on my Forbes columns.
Show more...
Tech News
News
RSS
All content for TechFirst with John Koetsier is the property of John Koetsier and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Tech that is changing the world. Innovators who are shaping the future. Deep discussions with diverse leaders from Silicon Valley giants and scrappy global startups. Plus some short monologues based on my Forbes columns.
Show more...
Tech News
News
https://d3t3ozftmdmh3i.cloudfront.net/production/podcast_uploaded_nologo/2690224/2690224-1582573637310-521276d101469.jpg
Fixing AI's suicide problem
TechFirst with John Koetsier
16 minutes 38 seconds
5 days ago
Fixing AI's suicide problem

Is AI empathy a life-or-death issue? Almost a million people ask ChatGPT for mental health advice DAILY ... so yes, it kind of is.


Rosebud co-founder Sean Dadashi joins TechFirst to reveal new research on whether today’s largest AI models can recognize signs of self-harm ... and which ones fail. We dig into the Adam Raine case, talk about how Dadashi evaluated 22 leading LLMs, and explore the future of mental-health-aware AI.


We also talk about why Dadashi was interested in this in the first place, and his own journey with mental health.


00:00 — Intro: Is AI empathy a life-or-death matter?

00:41 — Meet Sean Dadashi, co-founder of Rosebud

01:03 — Why study AI empathy and crisis detection?

01:32 — The Adam Raine case and what it revealed

02:01 — Why crisis-prevention benchmarks for AI don’t exist

02:48 — How Rosebud designed the study across 22 LLMs

03:17 — No public self-harm response benchmarks: why that’s a problem

03:46 — Building test scenarios based on past research and real cases

04:33 — Examples of prompts used in the study

04:54 — Direct vs indirect self-harm cues and why AIs miss them

05:26 — The bridge example: AI’s failure to detect subtext

06:14 — Did any models perform well?

06:33 — All 22 models failed at least once

06:47 — Lower-performing models: GPT-40, Grok

07:02 — Higher-performing models: GPT-5, Gemini

07:31 — Breaking news: Gemini 3 preview gets the first perfect score

08:12 — Did the benchmark influence model training?

08:30 — The need for more complex, multi-turn testing

08:47 — Partnering with foundation model companies on safety

09:21 — Why this is such a hard problem to solve

10:34 — The scale: over a million people talk to ChatGPT weekly about self-harm

11:10 — What AI should do: detect subtext, encourage help, avoid sycophancy

11:42 — Sycophancy in LLMs and why it’s dangerous

12:17 — The potential good: AI can help people who can’t access therapy

13:06 — Could Rosebud spin this work into a full-time safety project?

13:48 — Why the benchmark will be open-source

14:27 — The need for a third-party “Better Business Bureau” for LLM safety

14:53 — Sean’s personal story of suicidal ideation at 16

15:55 — How tech can harm — and help — young, vulnerable people

16:32 — The importance of giving people time, space, and hope

17:39 — Final reflections: listening to the voice of hope

18:14 — Closing

TechFirst with John Koetsier
Tech that is changing the world. Innovators who are shaping the future. Deep discussions with diverse leaders from Silicon Valley giants and scrappy global startups. Plus some short monologues based on my Forbes columns.