AXRP - the AI X-risk Research Podcast
Daniel Filan
59 episodes
2 days ago
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
Technology
Science
All content for AXRP - the AI X-risk Research Podcast is the property of Daniel Filan and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
New Patreon tiers + MATS applications
AXRP - the AI X-risk Research Podcast
5 minutes 32 seconds
1 year ago