What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice. We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI to make dangerous recommendations—including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical ju...
All content for Code & Cure is the property of Vasanth Sarathy & Laura Hagopian and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
#17 - How Multi-Agent Systems Could Reshape Care, From Wearables To Scheduling
Code & Cure
25 minutes
2 months ago
What if digital assistants could triage symptoms, schedule appointments, and coordinate rides—all while doctors focus on the human side of care? That’s the promise of multi-agent AI in healthcare. In this episode, we explore how these intelligent teams of agents are transforming both clinical and operational workflows. We begin by breaking down what an AI “agent” really is: not just a chatbot, but a goal-oriented system that can use tools, call APIs, and take real-world actions. You'll hear h...