2021 MIRI Conversations
Peter Barnett
13 episodes
3 hours ago
These are AI-generated podcasts of the 2021 MIRI Conversations (https://www.lesswrong.com/s/n945eovrA3oDueqtq). This podcast is a personal project because I like listening to audio, and there weren't good audio versions of the conversations. Please remember that these conversations are from 2021.
Philosophy
Society & Culture
Episodes (13/13)
Shah and Yudkowsky on alignment failures

This is the final discussion log in the Late 2021 MIRI Conversations sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn.

The discussion begins with summaries and comments on Richard and Eliezer's debate. Rohin's summary has since been revised and published in the Alignment Newsletter.

This was originally posted on 28th Feb 2022.

https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/tcCxPLBrEXdxN5HCQ

3 months ago
2 hours 45 minutes 52 seconds

Christiano and Yudkowsky on AI predictions and human intelligence

This is a transcript of a conversation between Paul Christiano and Eliezer Yudkowsky, with comments by Rohin Shah, Beth Barnes, Richard Ngo, and Holden Karnofsky, continuing the Late 2021 MIRI Conversations.

This was originally posted on 23rd Feb 2022.

https://www.lesswrong.com/posts/NbGmfxbaABPsspib7/christiano-and-yudkowsky-on-ai-predictions-and-human

3 months ago
1 hour 12 minutes 41 seconds

Ngo and Yudkowsky on scientific reasoning and pivotal acts

This is a transcript of a conversation between Richard Ngo and Eliezer Yudkowsky, facilitated by Nate Soares (and with some comments from Carl Shulman). This transcript continues the Late 2021 MIRI Conversations sequence, following Ngo's view on alignment difficulty.

This was originally posted on 21st Feb 2022.

https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/cCrpbZ4qTCEYXbzje

3 months ago
1 hour 1 minute

Ngo's view on alignment difficulty

This post features a write-up by Richard Ngo on his views, with inline comments.

This was originally posted on 14th Dec 2021.

https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/gf9hhmSvpZfyfS34B

3 months ago
34 minutes 57 seconds

Conversation on technology forecasting and gradualism

This post is a transcript of a multi-day discussion between Paul Christiano, Richard Ngo, Eliezer Yudkowsky, Rob Bensinger, Holden Karnofsky, Rohin Shah, Carl Shulman, Nate Soares, and Jaan Tallinn, following up on the Yudkowsky/Christiano debate in 1, 2, 3, and 4.

This was originally posted on 9th Dec 2021.

https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/nPauymrHwpoNr6ipx

3 months ago
1 hour 23 seconds

More Christiano, Cotra, and Yudkowsky on AI progress

This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky (with some comments from Rob Bensinger, Richard Ngo, and Carl Shulman), continuing from 1, 2, and 3.

This was originally posted on 6th Dec 2021.

https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/fS7Zdj2e2xMqE6qja

3 months ago
1 hour 10 minutes 3 seconds

Shulman and Yudkowsky on AI progress

This post is a transcript of a discussion between Carl Shulman and Eliezer Yudkowsky, following up on a conversation with Paul Christiano and Ajeya Cotra.

https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/sCCdCLPN9E3YvdZhj

3 months ago
36 minutes 59 seconds

Christiano, Cotra, and Yudkowsky on AI progress

This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky on AGI forecasting, following up on Paul and Eliezer's "Takeoff Speeds" discussion.

This was originally posted on 25th Nov 2021.

https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7MCqRnZzvszsxgtJi

3 months ago
2 hours 7 minutes 46 seconds

Soares, Tallinn, and Yudkowsky discuss AGI cognition

This is a collection of follow-up discussions in the wake of Richard Ngo and Eliezer Yudkowsky's Sep. 5–8 and Sep. 14 conversations.

This was originally posted on 29th Nov 2021.

https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/oKYWbXioKaANATxKY

3 months ago
1 hour 14 minutes 47 seconds

Yudkowsky and Christiano discuss "Takeoff Speeds"

This is a transcription of Eliezer Yudkowsky responding to Paul Christiano's "Takeoff Speeds" live on Sep. 14, followed by a conversation between Eliezer and Paul. This discussion took place after Eliezer's conversation with Richard Ngo.

This was originally posted on 22nd Nov 2021.

https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/vwLxd6hhFvPbvKmBH

3 months ago
1 hour 47 minutes 13 seconds

Ngo and Yudkowsky on AI capability gains

This is the second post in a series of transcribed conversations about AGI forecasting and alignment.

This was originally posted on 18th Nov 2021.

https://www.lesswrong.com/posts/hwxj4gieR7FWNwYfa/ngo-and-yudkowsky-on-ai-capability-gains-1

3 months ago
1 hour 12 minutes 20 seconds

Ngo and Yudkowsky on alignment difficulty

This post is the first in a series of transcribed Discord conversations between Richard Ngo and Eliezer Yudkowsky, moderated by Nate Soares.

This was originally posted on 15th Nov 2021.

https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty

3 months ago
1 hour 57 minutes 49 seconds

Discussion with Eliezer Yudkowsky on AGI interventions

The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as "Anonymous".

This was originally posted on 10th Nov 2021.

https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions

3 months ago
1 hour 2 minutes
