What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice. We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI to make dangerous recommendations—including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical judgment.
All content for Code & Cure is the property of Vasanth Sarathy & Laura Hagopian and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
#9 - Ambient Documentation Tech: Reducing Burnout or Creating New Problems?
Code & Cure
28 minutes
3 months ago
AI is writing medical notes, but can doctors trust what it creates? Burnout is quietly eroding the medical workforce, and documentation overload is a major culprit. Physicians now spend nearly half their workday writing notes instead of treating patients, pushing many to the brink of exhaustion. Could artificial intelligence offer a lifeline? In this episode, we explore ambient documentation technology (ADT)—AI tools that automatically generate clinical notes by listening to patient-doctor conversations.