
Episode number: Q002
Title: AI assistants in a crisis of confidence: Why a 45% error rate jeopardizes quality journalism and our processes
The largest international study of its kind, conducted by the EBU and the BBC, is a wake-up call for every publication and every process manager: 45% of AI-generated answers to news questions contain significant errors, and with Google Gemini the rate climbs to 76%, driven primarily by serious sourcing deficiencies. We take a look behind the numbers.
These errors are no coincidence; they are a systemic risk, exacerbated by a toxic feedback loop: AI hallucinations are published unchecked and then cemented as fact by the next AI.
In this episode, we analyze the consequences for due diligence and truthfulness as fundamental pillars of journalism, and we show why now is the time for internal process audits that establish human-verified quality-control loops. This is not about banning the technology, but about using AI's weaknesses to strengthen our own standards. Quality over speed.
A must-listen for anyone responsible for anchoring processes, structure, and trust in digital content management.
(Note: This podcast episode was created with the support and structuring of Google's NotebookLM.)