Home
Categories
EXPLORE
True Crime
Comedy
Society & Culture
Business
Sports
History
Technology
About Us
Contact Us
Copyright
© 2024 PodJoint
00:00 / 00:00
Sign in

or

Don't have an account?
Sign up
Forgot password
https://is1-ssl.mzstatic.com/image/thumb/Podcasts211/v4/0c/b3/e2/0cb3e260-dcb4-61b7-ad7d-610db81418ac/mza_1323347979794039078.jpg/600x600bb.jpg
Code & Cure
Vasanth Sarathy & Laura Hagopian
25 episodes
1 week ago
What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice. We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI to make dangerous recommendations—including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical ju...
Show more...
Health & Fitness
Technology,
Science
RSS
All content for Code & Cure is the property of Vasanth Sarathy & Laura Hagopian and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice. We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI to make dangerous recommendations—including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical ju...
Show more...
Health & Fitness
Technology,
Science
https://is1-ssl.mzstatic.com/image/thumb/Podcasts211/v4/0c/b3/e2/0cb3e260-dcb4-61b7-ad7d-610db81418ac/mza_1323347979794039078.jpg/600x600bb.jpg
#6 - AI Chatbots Gone Wrong
Code & Cure
27 minutes
4 months ago
#6 - AI Chatbots Gone Wrong
What if a chatbot designed to support recovery instead encouraged the very behaviors it was meant to prevent? In this episode, we unravel the cautionary saga of Tessa, a digital companion built by the National Eating Disorder Association to scale mental health support during the COVID-19 surge—only to take a troubling turn when powered by generative AI. At first, Tessa was a straightforward rules-based helper, offering pre-vetted encouragement and resources. But after an AI upgrade, users beg...
Code & Cure
What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice. We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI to make dangerous recommendations—including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical ju...