
This is my conversation with Michael Nielsen, scientist, author, and research fellow at the Astera Institute.
Timestamps:
- 00:00:00 intro
- 00:01:06 cultivating optimism amid existential risks
- 00:07:16 asymmetric leverage
- 00:12:09 are "unbiased" models even feasible?
- 00:18:44 AI and the scientific method
- 00:23:23 unlocking AI's full power through better interfaces
- 00:30:33 sponsor: Splits
- 00:31:18 AIs: independent agents or intelligent tools?
- 00:35:47 autonomous military and weapons
- 00:42:14 finding alignment
- 00:48:28 aiming for specific moral outcomes with AI?
- 00:54:42 freedom/progress vs safety
- 00:57:46 provably beneficial surveillance
- 01:04:16 psychological costs
- 01:12:40 the ingenuity gap
Links:
- Michael Nielsen: https://michaelnielsen.org/
- Michael Nielsen on X: https://x.com/michael_nielsen
- Michael's essay on being a wise optimist about science and technology: https://michaelnotebook.com/optimism/
- Michael's Blog: https://michaelnotebook.com/
- The Ingenuity Gap (Tad Homer-Dixon): https://homerdixon.com/books/the-ingenuity-gap/
Thank you to our sponsor for making this podcast possible:
- Splits: https://splits.org
Into the Bytecode:
- Sina Habibian on X: https://twitter.com/sinahab
- Sina Habibian on Farcaster: https://warpcast.com/sinahab
- Into the Bytecode: https://intothebytecode.com
Disclaimer: This podcast is for informational purposes only. It is neither financial advice nor a recommendation to buy or sell securities. The host and guests may hold positions in the projects discussed.