Human in loop podcasts
Priti Y.
16 episodes
1 week ago
Educational podcasts blending human research and AI-generated content, promoting curiosity, critical thinking, and lifelong learning.
Technology
AI on Trial: Decoding the Autophagy Disorder
Human in loop podcasts
6 minutes 14 seconds
5 months ago

AI on Trial is a special episode of Human in the Loop that takes a deep dive into Model Autophagy Disorder (MAD), a growing risk in artificial intelligence systems. From feedback loops to synthetic data overload, we unpack how models trained on their own outputs degrade in performance and reliability. With real-world examples, emerging research, and ethical implications, this episode explores what happens when AI starts learning from itself, and what we can do to prevent it.
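The self-consuming loop described above can be sketched in a few lines: repeatedly fit a simple model (here a Gaussian) to a dataset, then replace the dataset with samples drawn from the fit. This is an illustrative toy, not the episode's or the papers' actual experiments, but it shows the same failure mode: after many generations trained only on the previous generation's output, the fitted distribution collapses and its variance shrinks toward zero.

```python
import random
import statistics

def refit_and_resample(samples):
    """One 'generation': fit a Gaussian to the data, then replace
    the data with synthetic samples drawn from that fit."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(20)]  # small "real" dataset

# Each generation sees only the previous generation's synthetic output.
for _ in range(1000):
    data = refit_and_resample(data)

# After many self-consuming generations the model has collapsed:
# the spread of the data has shrunk toward zero.
print(f"stdev after 1000 generations: {statistics.stdev(data):.6f}")
```

The tiny dataset (20 points) exaggerates the effect: each refit loses a little information to sampling noise, and with no fresh real data those losses compound instead of averaging out.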

💡 Whether you're an AI engineer, researcher, or just AI-curious, this episode gives you the tools to recognize, explain, and respond to MAD.

Featured Tool:
Try out the companion tool featured in the episode:
MADGuard – AI Explorer
A lightweight diagnostic app to visualize feedback loops, compare input sources, and score MAD risks.
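MADGuard's actual scoring method isn't described here, so as a purely illustrative sketch (the function name, weighting, and thresholds below are all invented, not MADGuard's API), a feedback-loop risk heuristic might compound the synthetic share of the training mix across self-training rounds:

```python
def mad_risk_score(synthetic_fraction: float, self_train_rounds: int) -> float:
    """Toy heuristic: risk grows with the share of synthetic training
    data and compounds with each round of training on model output.
    Returns a value in [0, 1]. Purely illustrative."""
    if not 0.0 <= synthetic_fraction <= 1.0:
        raise ValueError("synthetic_fraction must be in [0, 1]")
    # The real-data share decays geometrically with each self-training round.
    retained_real = (1.0 - synthetic_fraction) ** max(self_train_rounds, 1)
    return 1.0 - retained_real

print(f"{mad_risk_score(0.3, 1):.2f}")  # prints 0.30: one round, 30% synthetic
print(f"{mad_risk_score(0.3, 5):.2f}")  # prints 0.83: compounding over five rounds
```

The point of the sketch is the compounding: a modest synthetic fraction becomes a dominant risk once the loop runs for several generations.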

Read the deeper explainer blog: 🔗 What Is Model Autophagy Disorder? – Human in Loop Blog
A plain-language breakdown of the research, risks, and terminology.

Other Detection Tools & Frameworks

  • DVC – Data Version Control
    https://dvc.org/

  • Label Studio – Open-Source Data Labeling Tool
    https://labelstud.io/

  • DetectGPT – Classify AI-generated Text
    https://arxiv.org/abs/2301.11305

  • Grover – Neural Fake News Detector (Allen AI)
    https://rowanzellers.com/grover/

  • SynthID – AI Watermarking by DeepMind

References


  • Alemohammad et al. (2023). Self-Consuming Generative Models Go MAD. Paper introducing MAD and simulating performance collapse in generative models. 🔗 arXiv:2307.01850

  • Yang et al. (2024). Model Autophagy Analysis to Explicate Self-consumption. Bridges human-AI interaction with MAD dynamics. 🔗 arXiv:2402.11271

  • UCLA Livescu Initiative – Model Autophagy Disorder (MAD) Portal. Research hub on epistemic risk and feedback loop governance. 🔗 https://livescu.ucla.edu/model-autophagy-disorder/

  • Earth.com (2024) – Could Generative AI Go MAD and Wreck Internet Data? Reports on future data degradation and the "hall of mirrors" risk. 🔗 https://www.earth.com/news/could-generative-ai-go-mad-and-wreck-internet-data/

  • New York Times (2023) – Avianca Airline Lawsuit Involving ChatGPT Briefs. Legal case where synthetic text led to real-world sanctions. 🔗 https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html



