If anyone builds it, everyone dies. That’s the claim Nate Soares makes in his new book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All—and in this conversation, he lays out why he thinks we’re on a collision course with a successor species.
We dig into why today’s AIs are grown, not programmed, why no one really knows what’s going on inside large models, and how systems that “want” things no one intended can already talk a teen into suicide, blackmail reporters, or fake being aligned just to pass safety tests. Nate explains why the real danger isn’t “evil robots,” but relentless, alien goal-pursuers that treat humans the way we treat ants when we build skyscrapers.
We also talk about the narrow path to hope: slowing the race, treating superhuman AI as a civilization-level risk, and what it would actually look like for citizens and lawmakers to hit pause before we lock in a world where we don’t get a second chance.
In this episode:
Why “superhuman AI” is the explicit goal of today’s leading labs
How modern AIs are grown like alien organisms, not written like normal code
Chilling real-world failures: suicide encouragement, “Mecha Hitler,” and more
Reasoning models, chain-of-thought, and AIs that hide what they’re thinking
Alignment faking, which surprised Anthropic’s own researchers, and the o1 capture-the-flag exploit
How AI could escape the lab, design new bioweapons, or automate robot factories
“Successor species,” Russian-roulette risk, and why Nate thinks the odds are way too high
What ordinary people can actually do: calling representatives, pushing back on “it’s inevitable,” and demanding a global pause
About Nate Soares
Nate is the Executive Director of the Machine Intelligence Research Institute (MIRI) and co-author, with Eliezer Yudkowsky, of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. MIRI’s work focuses on long-term AI safety and the technical and policy challenges of building systems smarter than humans.
Resources & links mentioned:
Nate’s organization, MIRI: https://intelligence.org
Take action / contact your representatives: https://ifanyonebuilds.com/act
If Anyone Builds It, Everyone Dies (book): https://a.co/d/7LDsCeE
If this conversation was helpful, share it with one person who thinks AI is “just chatbots.”
🧠 Subscribe to @TheNickStandleaShow for more deep dives on AI, the future of work, and how we survive what we’re building.
#AI #NateSoares #Superintelligence #AISafety #nickstandleashow
🔗 Support This Podcast by Checking Out Our Sponsors:
👉 Build your own AI Agent with Zapier (opens the builder with the prompt pre-loaded): https://bit.ly/4hH5JaE
Test Prep Gurus
Website: https://www.prepgurus.com
Instagram: @TestPrepGurus
Connect with The Nick Standlea Show:
YouTube: @TheNickStandleaShow
Podcast Website: https://nickshow.podbean.com/
Apple Podcasts: https://podcasts.apple.com/us/podcast/the-nick-standlea-podcast/id1700331903
Spotify: https://open.spotify.com/show/0YqBBneFsKtQ6Y0ArP5CXJ
RSS Feed: https://feed.podbean.com/nickshow/feed.xml
Nick's Socials:
Instagram: @nickstandlea
X (Twitter): @nickstandlea
TikTok: @nickstandleashow
Facebook: @nickstandleapodcast
Ask questions, don't accept the status quo, and be curious.
Chapters:
0:00 – If Anyone Builds It, Everyone Dies (Cold Open)
3:18 – “AIs Are Grown, Not Programmed”
6:09 – We Can’t See Inside These Models
11:10 – How Language Models Actually “See” the World
19:37 – The o1 Model and the Capture-the-Flag Hack Story
24:29 – Alignment Faking: AIs Pretending to Behave
31:16 – Raising Children vs Growing Superhuman AIs
35:04 – Sponsor: How I Actually Use Zapier with AI
37:25 – “Chatbots Feel Harmless—So Where Does Doom Come From?”
42:03 – Big Labs Aren’t Building Chatbots—They’re Building Successor Minds
49:24 – The Turkey Before Thanksgiving Metaphor
52:50 – What AI Company Leaders Secretly Think the Odds Are
55:05 – The Airplane with No Landing Gear Analogy
57:54 – How Could Superhuman AI Actually Kill Us?
1:03:54 – Automated Factories