© 2024 PodJoint
https://is1-ssl.mzstatic.com/image/thumb/Podcasts211/v4/d8/b7/27/d8b72741-4a96-73a6-e98e-c6c2402e48ec/mza_11654084090888999774.jpg/600x600bb.jpg
LessWrong (Curated & Popular)
LessWrong
720 episodes
2 days ago
On Thinkish, Neuralese, and the End of Readable Reasoning
In September 2025, researchers published the internal monologue of OpenAI's GPT-o3 as it decided to lie about scientific data. This is what it thought: Pardon? This looks like someone had a stroke during a meeting they didn't want to be in, but their hand kept taking notes. That transcript comes from a recent paper published by researchers at Apollo Research and OpenAI on catching AI systems scheming. To understand what's happen...
Technology, Society & Culture, Philosophy
All content for LessWrong (Curated & Popular) is the property of LessWrong and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
“Alignment remains a hard, unsolved problem”
LessWrong (Curated & Popular)
23 minutes
1 month ago
Thanks to (in alphabetical order) Joshua Batson, Roger Grosse, Jeremy Hadfield, Jared Kaplan, Jan Leike, Jack Lindsey, Monte MacDiarmid, Francesco Mosconi, Chris Olah, Ethan Perez, Sara Price, Ansh Radhakrishnan, Fabien Roger, Buck Shlegeris, Drake Thomas, and Kate Woolverton for useful discussions, comments, and feedback. Though there are certainly some issues, I think most current large language models are pretty well aligned. Despite its alignment faking, my favorite is probably Claude ...