What happens when your health chatbot sounds helpful—but gets the facts wrong? In this episode, we explore how AI systems, especially large language models, can prioritize pleasing responses over truthful ones. Using the common confusion between Tylenol and acetaminophen, we reveal how a friendly tone can hide logical missteps and mislead users. We unpack how these models are trained—from next-token prediction to human feedback—and why they tend to favor agreeable answers over rigorous reason...
#1 - Eye Spy with My AI: Tackling Diabetic Retinopathy
Code & Cure
28 minutes
4 months ago
What if a simple photograph of your eye could prevent blindness? Diabetic retinopathy silently steals vision from millions worldwide, yet it's treatable when caught early. The challenge? Too few specialists, limited access to care, and not enough awareness about this serious complication of diabetes. We dive deep into how artificial intelligence is transforming this landscape by analyzing retinal photos with remarkable accuracy. Through neural networks trained on thousands of eye images, thes...