All content for AI Deep Dive is the property of Pete Larkin and is served directly from their servers
with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Curated AI news and stories from all the top sources, influencers, and thought leaders.
Big tech is betting trillions on compute as if capacity alone will buy AGI: OpenAI's new $38 billion AWS compute deal sits inside a reported $1.4 trillion infrastructure plan, Microsoft is locking down billions in chips and data centers, and startups like Lambda are lining up the newest Nvidia hardware. That hardware rush is already forcing rapid adoption: Coca-Cola cut a year-long ad production cycle to 30 days using fully AI-generated holiday spots, and Cognizant is rolling Anthropic's Claude out to 350,000 employees.

But the ground truth is sobering. The new Remote Labor Index tested 240 real client assignments across 23 categories and found that leading models completed professional-grade work less than 3% of the time; failures were often practical (broken files, incomplete handoffs), not theoretical. At the same time, creators are pushing back over unauthorized training data, exposing legal and ethical friction beneath the rush.

There are clear, immediate wins, such as Slack Enterprise Search, Copilot as an interactive tutor, and meeting automation, but the big gap remains: GPUs are accelerating capability, not yet reliably coordinating multi-step, client-ready deliverables. With companies predicting research-automation leaps within months, the episode ends with a provocative question for marketers and creators: are you still writing for human eyeballs today, or are you already shaping the training data for the learning systems of tomorrow?