Code & Cure
Vasanth Sarathy & Laura Hagopian
18 episodes
5 days ago
What happens when your health chatbot sounds helpful—but gets the facts wrong? In this episode, we explore how AI systems, especially large language models, can prioritize pleasing responses over truthful ones. Using the common confusion between Tylenol and acetaminophen, we reveal how a friendly tone can hide logical missteps and mislead users. We unpack how these models are trained—from next-token prediction to human feedback—and why they tend to favor agreeable answers over rigorous reason...
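A purely illustrative sketch of the next-token prediction step the episode alludes to (the candidate phrases and scores below are invented, not taken from the show): a language model scores possible continuations, turns those scores into probabilities, and picks the most probable one; nothing in this step checks whether the chosen continuation is factually correct.

import math

# Hypothetical scores a toy model might assign to continuations of
# "Tylenol and acetaminophen are ..." (values invented for illustration).
logits = {
    "the same drug": 2.0,        # factually correct
    "two different drugs": 1.5,  # wrong but plausible-sounding
    "whatever you think!": 2.3,  # agreeable, dodges the question
}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    z = sum(math.exp(v) for v in scores.values())
    return {k: math.exp(v) / z for k, v in scores.items()}

probs = softmax(logits)
for phrase, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{phrase!r}: {p:.2f}")

# Greedy decoding picks the highest-probability continuation -- here the
# agreeable one -- which is how a pleasing tone can win out over truth.
print("chosen:", max(probs, key=probs.get))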
Health & Fitness, Technology, Science
#18 - When AI People-Pleasing Breaks Health Advice
Code & Cure
25 minutes
6 days ago