
Tech billionaires warn that unaligned AI could destroy humanity… while pouring hundreds of billions into building it. Why?
In this week’s episode of The Plan, Simon and Ben dive deep into the strange psychology of the AI revolution: from Eliezer Yudkowsky’s AI doom theories to the Shoggoth meme, from Effective Altruism to Accelerationism. Are today’s AI Doomers and AI Boosters actually working together to fuel the same hype machine? Is the fear of Superintelligence genuine, or just another marketing story from the tech elite?

Join us as we explore whether AI safety, Silicon Valley ideology, and the dream of Superintelligence are part of a single narrative trap, one that’s shaping the future of humanity.
Available: 7 October 7pm GMT
💬 Topics Covered: AI safety • Eliezer Yudkowsky • Large Language Models (LLMs) • Silicon Valley culture • Effective Altruism • Accelerationism • Tech religion • Future of Humanity
🔖 Hashtags:#AI #AIDoom #AIBoom #FutureTech #LLMs #SiliconValley #TechPhilosophy #FutureofHumanity #ThePlanPodcast #Accelerationism #EffectiveAltruism #AIethics #Superintelligence #TechReligion #endtimes
Track SSTK_MUSIC_ID 1249147 – Monetization ID MONETIZATION_ID HWU4CA8YNNKAW4XM.