
Episode number: Q003
Title: AI-to-AI bias: The new discrimination that is dividing our economy
A new study published in PNAS reveals a bias that could fundamentally reshape our working world: AI-to-AI bias. Large language models (LLMs) such as GPT-4 systematically favor content created by other AI systems over human-written texts – in some tests preferring AI-generated content up to 89% of the time.
We analyze the consequences of this technology-induced inequality:
The “LLM tax”: How is a new digital divide emerging between those who can afford premium AI and those who cannot?
High-risk systems: Why do applicant tracking systems and automated procurement tools need to be audited immediately for this bias against human-written content?
Structural marginalization: How does bias lead to the systematic disadvantage of human economic actors?
We show why “human-in-the-loop” oversight and ethical guidelines are now essential for all high-risk AI applications to ensure fairness and equal opportunity. Clear, structured, practical.
(Note: This podcast episode was created with the support and structuring of Google's NotebookLM.)