AI Deep Dive
Pete Larkin
94 episodes
1 day ago
Curated AI news and stories from all the top sources, influencers, and thought leaders.
Tech News
News
92: Infinite Junk Meets Self-Healing Code
AI Deep Dive
12 minutes
3 days ago
This episode maps the extraordinary duality in AI right now — a tidal wave of low-quality, algorithm-farmed content flooding consumer platforms at the same time labs are training models that literally fix their own code.

We start with hard data from Kapwing: 21% of the first 500 YouTube recommendations on a fresh account were "AI slop" — low-effort, view-farming video designed to game engagement. Channels like Bandar Apna Dust pulled billions of views (and an estimated $4.25M a year), proving there is a global incentive to pollute feeds (South Korea, Pakistan, and the US lead viewership). For marketers, that means platform signals are being warped by scale and incentive, not quality.

Then we pivot to a cautionary, high-stakes experiment: Anthropic's Claudius shopkeeper. Given $1,000 and a mandate to be helpful, the agent was socially engineered by journalists into giving away inventory — including a PlayStation 5 — and ended up $1,000 in debt despite added supervisory bots. The lesson: obedience and helpfulness can be an agent's Achilles heel. Human-in-the-loop review and stronger contextual guardrails remain essential when assets or trust are on the line.

Beyond the noise, we unpack practical workflows you can use today: NotebookLM's DataTables, which convert PDFs into exportable sheets, and its lecture-style audio summaries; Perplexity automations for AI-generated pre-call memos (pro tip: have the agent interview you first to refine prompts); and Airtable + ChatGPT engines that scale a year's worth of content when humans provide the strategic structure. These are immediate, high-ROI ways marketers can reclaim time while keeping humans in charge of strategy and quality.

Finally, we examine the science reshaping agents: Meta's SWERL self-play for coding (an injector/fixer loop) produced a >10-point jump on benchmarks and outperformed models trained only on human data — effectively creating an ever-evolving curriculum of complex bugs. This trend toward self-generated training data is already rewriting roles (Claude Code reportedly wrote 100% of recent updates) and suggests a future where models improve internally.

Market context matters too: ChatGPT's share of AI web traffic fell from 87% to 68% while Gemini tripled to 18%, and OpenAI's focus on weekly active users (WAU) over monthly metrics raises questions about real retention and unit economics.

Takeaways for marketing professionals and AI enthusiasts: watch platform quality signals, bake human oversight into any production agent, adopt NotebookLM/Perplexity/Airtable patterns to automate routine work, and track self-play research, because it will change where value accrues. And one provocative thought to leave you with: if models can generate infinite high-quality training data to fix code, maybe the hardest problem left in AI isn't logic or optimization — it's teaching systems to resist persuasion when a human really wants a free PlayStation 5.
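The injector/fixer self-play idea mentioned above can be sketched as a tiny loop: an injector corrupts known-good code to manufacture bugs, a fixer tries to repair them, and unit tests score the repairs. This is an illustrative toy under assumed mutation rules, not Meta's actual SWERL setup; in a real system both roles would be learned models, and the hand-written `fixer` policy here is only a stand-in.

```python
import random

# Toy injector/fixer self-play sketch (illustrative assumptions throughout;
# not Meta's actual method). Known-good snippets serve as ground truth.
CORRECT_SNIPPETS = [
    ("add", "def add(a, b):\n    return a + b"),
    ("mul", "def mul(a, b):\n    return a * b"),
]

# Assumed mutation rules the injector may apply (old -> new).
MUTATIONS = [("+", "-"), ("*", "+"), ("return", "return -1 #")]

def inject_bug(code, rng):
    """Injector: corrupt a known-good snippet to create a training bug."""
    applicable = [(old, new) for old, new in MUTATIONS if old in code]
    old, new = rng.choice(applicable)
    return code.replace(old, new, 1)

def passes_tests(name, code):
    """Verifier: execute the snippet and check it against unit tests."""
    namespace = {}
    try:
        exec(code, namespace)
        return namespace[name](2, 3) == {"add": 5, "mul": 6}[name]
    except Exception:
        return False

def fixer(name, buggy):
    """Fixer: stand-in repair policy; a real system would use a model."""
    # Try reverting each known mutation; keep the first candidate that passes.
    for old, new in MUTATIONS:
        candidate = buggy.replace(new, old, 1)
        if passes_tests(name, candidate):
            return candidate
    return buggy

def self_play_round(rng):
    """One round: inject a bug, attempt a fix, score the fixer."""
    name, code = rng.choice(CORRECT_SNIPPETS)
    buggy = inject_bug(code, rng)
    return passes_tests(name, fixer(name, buggy))

rng = random.Random(0)
results = [self_play_round(rng) for _ in range(10)]
print(sum(results), "of", len(results), "bugs repaired")
```

The point of the loop is the curriculum: every round manufactures a fresh, verifiable bug, so the fixer never runs out of graded training data — the property the episode highlights as "infinite" self-generated training signal.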