
Tech billionaires warn that unaligned AI could destroy humanity… while pouring hundreds of billions into building it. Why?

In this week's episode of The Plan, Simon and Ben dive deep into the strange psychology of the AI revolution, from Eliezer Yudkowsky's AI doom theories to the Shoggoth meme, from Effective Altruism to Accelerationism. Are today's AI Doomers and AI Boosters actually working together to fuel the same hype machine? Is the fear of Superintelligence genuine, or just another marketing story from the tech elite?

Join us as we explore whether AI safety, Silicon Valley ideology, and the dream of Superintelligence are part of a single narrative trap, one that is shaping the future of humanity.

Book Recommendations (Affiliate links):

At the Mountains of Madness: H.P. Lovecraft
https://amzn.to/4o3bCAy

If Anyone Builds It, Everyone Dies: Eliezer Yudkowsky, Nate Soares
https://amzn.to/4nD2qDv

Empire of AI: Karen Hao
https://amzn.to/3KutqXb

The AI Con: Emily M. Bender, Alex Hanna
https://amzn.to/4mTl2xP

The Three-Body Problem: Cixin Liu
https://amzn.to/46UX5Ak

More Everything Forever: Adam Becker
https://amzn.to/4pVhuxF

Links:

The Illusion of Thinking (Apple): https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
Gary Marcus Substack: https://substack.com/@garymarcus
LessWrong: https://www.lesswrong.com
MIT Study: https://www.media.mit.edu/publications/your-brain-on-chatgpt/

💬 Topics Covered:
AI safety • Eliezer Yudkowsky • Large Language Models (LLMs) • Silicon Valley culture • Effective Altruism • Accelerationism • Tech religion • Future of Humanity

🔖 Hashtags:
#AI #AIDoom #AIBoom #FutureTech #LLMs #SiliconValley #TechPhilosophy #FutureofHumanity #ThePlanPodcast #Accelerationism #EffectiveAltruism #AIethics #Superintelligence #TechReligion #endtimes

Music: Track ID 1249147 – Monetization ID HWU4CA8YNNKAW4XM