What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice. We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI to make dangerous recommendations—including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical judgment…
#15 - When Algorithms Know Your End-Of-Life Wishes Better Than Loved Ones
Code & Cure
23 minutes
2 months ago
What if the person who knows you best isn’t the best person to speak for you when it matters most? We explore a study that tested just that—comparing the CPR preferences predicted by loved ones with those predicted by machine learning. The result? Algorithms got it right more often. That surprising outcome raises tough, important questions: Why do partners misjudge? And could AI really support life-and-death decisions when seconds count? We unpack the study’s approach in everyday terms: who...