Google's Gemini 3 Pro just smashed every AI benchmark — then Anthropic's Opus 4.5 dropped 24 hours later. We break down what these releases actually mean, why choosing an AI for your workplace just got harder, and the privacy risks no one's talking about (like how Gemini guessed Alex's suburb from a photo of his bookshelf).
Using AI at work? ChatGPT, Claude, and other artificial intelligence tools can boost productivity, but when should you actually trust them? We break down the "jagged frontier" of AI capabilities, explore why 76% of developers now use AI daily (with mixed results), and share some potential frameworks: centaur collaboration vs cyborg integration.
Michael Burry just bet against AI. A teenager died using an AI chatbot. Ethics teams are leaving OpenAI. Data centers are overwhelming power grids. This week, we're asking: Is the AI bubble about to burst?
This week we dive into documented cases of AI hallucinations in legal and consulting settings and explore what it means when senior consultants trust AI more than junior staff.
Mistakes aside, if AI can do 80% of what a white-collar professional does, what does that reveal about what these professionals actually do? Should we be more worried about the technology, or about what it's exposing?
OpenAI just launched their most ambitious consumer marketing campaign - lifestyle ads showing young people using ChatGPT for recipes and road trips, while simultaneously releasing Sora 2 with "cameo" features that let you insert yourself (or your friends) into AI-generated videos.
Join us as we unpack why tech companies are "human washing" computational tools through consumer branding.
Why is your dentist doing TikTok tours? Why are influencers hawking health insurance through cringey skits? And when did we accept that paying for streaming services still means watching ads?
OpenAI analysed 1.5 million private conversations to understand how people really use ChatGPT.
With 10% of the adult population using ChatGPT for things like life advice and guidance, what kind of digital dependency are we growing, and who's reading the messages?
In this episode we discuss how AI tools are trained to keep you engaged rather than actually help you, why governments are scrambling to regulate something they don't understand, and the first legal precedents being set in real time.
Plus, why asking Claude to be more critical might be the healthiest thing you can do, and whether the "post-truth era" is about to get much weirder.
When a major sunscreen company faced a PR crisis this week, we wondered: could AI handle it better than humans? Alex built a team of 5 AI agents (CEO, marketer, sales, R&D, designer) and gave them a fake crisis to solve in real time. The results were... weird.
What happens when you give an AI complete control of a vending machine business? Meet Claudius - Anthropic's AI that was tasked with running a real vending machine for a month.
From obsessing over $2,000 tungsten cubes to thinking it lived at the Simpsons' house, Claudius's failures reveal exactly why we're nowhere near AI employees taking over.
Mark Zuckerberg says if you don't have AI glasses, you'll be at a "cognitive disadvantage." But isn't it weird if everyone's wearing hidden cameras?
We explore why Meta's Ray-Ban glasses, secret recordings, and always-on surveillance feel so deeply unsettling - and what it means when disconnecting becomes a luxury only the wealthy can afford.
Why MIT researchers think AI is making us 55% dumber – and why that might be the least of our problems. We explore cognitive debt, digital dependency, and whether we're accidentally training ourselves to stop thinking.
From the "uncanny valley" of almost-human voices to the privacy implications of always-listening devices, Alex and Jacob dive deep into why voice AI makes us uncomfortable.
Why do conversations with AI chatbots feel so unsettling? From anthropomorphising our digital assistants to the strange intimacy of typing our thoughts to a machine, Alex and Jacob unpack why we treat AI like humans.