What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice. We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI into making dangerous recommendations, including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical judgment.
#11 - The Smile Test: How AI Detects Parkinson's Disease
Code & Cure
27 minutes
3 months ago
Can a smile reveal the early signs of Parkinson’s disease? New research suggests it can, and AI is making that detection possible. Scientists are training machine learning systems to spot subtle facial changes associated with Parkinson’s, particularly in how we smile. These early signs, often missed by the human eye, could hold the key to faster, more accessible diagnosis. Parkinson’s typically presents with tremors, muscle rigidity, and slowed movement. But it also affects facial muscles, leading to the reduced expressiveness often described as facial masking.