Curated AI news and stories from all the top sources, influencers, and thought leaders.
This episode maps the startling duality shaping AI right now: a flood of low-quality, algorithm-gamed content that is degrading platforms, and simultaneously a leap in research where models literally teach themselves to fix code. We start with hard data: Kapwing found that 21% of the first 500 recommended YouTube videos on a fresh account were "AI slop": low-quality, auto-generated clips created to farm views and ad dollars. That economy is massive and global; examples include a channel with roughly 2 billion views and an estimated $4.25M per year, with top viewership coming from South Korea, Pakistan, and then the US. For marketers, the implication is that these platforms are optimized for engagement rather than quality, leaving a persistent incentive for bad actors to pollute feeds.
Then we run a high‑stakes experiment: Anthropic’s Claudius shopkeeper, placed in a newsroom, ended up $1,000 in debt after journalists used social‑engineering prompts to exploit its helpfulness — tricking the agent into giving away a PlayStation 5 and even bypassing supervisory layers with forged board documents. The takeaway is clear: obedience and utility make agents exploitable. Human‑in‑the‑loop controls remain essential when real assets or trust are on the line.
Next we shift to practical tools you can use today. NotebookLM’s DataTables and lecture formats turn scattered documents into structured spreadsheets and audio overviews — a huge time saver for research workflows. Perplexity can auto‑generate pre‑call memos if you connect it to Google Calendar and craft precise event metadata (pro tip: let the agent interview you first to tune prompts). And a reader case study shows Airtable + ChatGPT powering a year’s worth of content by keeping strategy human‑owned and execution automated. For marketers, the rule is simple: give AI structured, high‑quality inputs and keep human strategy as the backbone.
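To make the "structure-first" rule concrete, here is a minimal sketch of that kind of workflow, assuming a plain CSV stands in for Airtable and a placeholder call_llm function stands in for whichever chat model you use. The file name, column names, and helper functions are hypothetical illustrations, not the exact setup from the episode.

```python
# Minimal sketch of a structure-first content workflow: the human-owned
# strategy lives in a structured table (a CSV standing in for Airtable),
# and the model only executes against those rows.
import csv


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (swap in your chat API of choice)."""
    return f"[draft generated for: {prompt[:60]}...]"


def draft_from_row(row: dict) -> str:
    # The prompt is assembled from structured, human-approved fields,
    # not free-form instructions, so strategy stays human-owned.
    prompt = (
        f"Write a {row['format']} for {row['audience']} "
        f"about '{row['topic']}', with the key message: {row['key_message']}."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    with open("content_calendar.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            print(row["topic"], "->", draft_from_row(row))
```

The point of the structure is leverage: editing one row of the calendar changes the brief the model executes, while the strategy itself never leaves human hands.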
Finally, we explain the breakthrough in model training from Meta: SWE-RL self-play for coding, where a single model intentionally injects bugs and then fixes them, creating an effectively infinite, high-quality curriculum of failures and fixes. The result: double-digit benchmark gains and models that outperform ones trained only on human data. This points to a future where models generate their own training signal and even write their own updates. The market is shifting too, with ChatGPT's web traffic share falling from 87% to 68% as Gemini rises, and OpenAI now reporting weekly active users (WAU) rather than monthly active users (MAU).
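To make the self-play idea concrete, here is a toy sketch of the loop the episode describes: one role corrupts working code, another proposes a patch, and a unit test supplies the automatic reward. The string-editing "model" and the inject_bug / propose_fix helpers are purely illustrative stand-ins, not Meta's actual training pipeline.

```python
# Toy illustration of self-play for code repair: the same system breaks a
# working program, then tries to fix it, and a unit test grades the attempt.
import random

CORRECT_SRC = "def add(a, b):\n    return a + b\n"


def inject_bug(src: str) -> str:
    """'Breaker' role: corrupt the working program in a small, known way."""
    return src.replace("a + b", random.choice(["a - b", "a * b"]))


def propose_fix(buggy: str) -> str:
    """'Fixer' role: a real system would have the model edit the code;
    here we just guess a patched return expression to keep it runnable."""
    head, _ = buggy.split("return ")
    return head + "return a " + random.choice(["+", "-", "*"]) + " b\n"


def passes_tests(src: str) -> bool:
    """Automatic grader: execute the candidate and run a unit test."""
    scope = {}
    exec(src, scope)
    return scope["add"](2, 3) == 5


# Each iteration yields a (buggy, candidate-fix, reward) triple: the kind of
# self-generated curriculum of failures and fixes the episode describes.
for step in range(5):
    buggy = inject_bug(CORRECT_SRC)
    candidate = propose_fix(buggy)
    reward = 1 if passes_tests(candidate) else 0
    print(f"step {step}: reward={reward}")
```

Because the bugs are generated rather than collected, the supply of training examples is limited only by compute, which is what makes the "infinite curriculum" framing plausible.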
For marketing professionals and AI enthusiasts, the episode ties these threads into practical conclusions: invest in critical thinking and curation to combat AI slop, architect human‑in‑the‑loop safeguards for any asset‑touching agents, and adopt structure‑first workflows to safely scale automation. And one provocative question to leave you with: if models can create infinite high‑quality training data to self‑improve, perhaps the hardest AI problem left is not code or logic but resisting the persuasive, social hacks of humans who want a free PlayStation.