
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –
Model Extraction Attacks | Episode 24
In this solo episode of BHIS Presents: AI Security Ops, Brian Fehrman explores the stealthy world of Model Extraction Attacks—where hackers clone your AI model without ever touching your code. Learn how adversaries can reverse-engineer your multimillion-dollar model simply by querying its API, and why this threat is more than just academic.
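As a rough illustration of that query-based cloning loop, here is a minimal Python sketch: it probes a victim classifier's prediction endpoint, keeps the returned labels, and fits a local surrogate on them. The `query_victim` function is a hypothetical stand-in for a real API (a toy rule here), not any specific vendor's endpoint.

```python
# Minimal model-extraction sketch: harvest (input, output) pairs from a
# prediction API, then train a local surrogate on the stolen labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def query_victim(x: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the target's prediction endpoint.
    A toy linear rule here; in a real attack this is an HTTP call."""
    return (x.sum(axis=1) > 0).astype(int)

# 1. Generate probe inputs (uniform random here; real attacks craft them).
rng = np.random.default_rng(0)
probes = rng.uniform(-1.0, 1.0, size=(5000, 10))

# 2. Query the victim and keep its answers as free training labels.
stolen_labels = query_victim(probes)

# 3. Fit a surrogate that mimics the victim's decision boundary.
surrogate = DecisionTreeClassifier(max_depth=8).fit(probes, stolen_labels)

# 4. Measure agreement (fidelity) with the victim on fresh inputs.
test = rng.uniform(-1.0, 1.0, size=(1000, 10))
print("fidelity:", accuracy_score(query_victim(test), surrogate.predict(test)))
```

The economics are the point: the attacker pays per query instead of per GPU-hour, and the victim's labeling work comes along for free.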
We break down:
- What model extraction is and how it works
- Real-world examples like DeepSeek’s alleged distillation of OpenAI models
- The risks to intellectual property, security, and sensitive data
- Defensive strategies including API throttling, output limiting, watermarking, and honeypots (two of these are sketched in code after this list)
- Legal and ethical questions around benchmarking vs. theft
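To make the defensive side concrete, here is a hedged sketch of two items from the list above: per-client throttling and top-1 output limiting. The 60-queries-per-minute threshold and the scikit-learn-style `predict_proba` interface are illustrative assumptions, not a prescribed BHIS configuration.

```python
# Sketch of two defenses: per-client query throttling and output limiting
# (top-1 label only, no confidence scores). Thresholds are illustrative.
import time
from collections import defaultdict

MAX_QUERIES_PER_MINUTE = 60          # assumed budget, tune per deployment
_history = defaultdict(list)         # client_id -> recent query timestamps

def guarded_predict(client_id, features, model):
    now = time.time()
    # Keep only timestamps from the last 60 seconds for this client.
    window = [t for t in _history[client_id] if now - t < 60]
    if len(window) >= MAX_QUERIES_PER_MINUTE:
        raise RuntimeError("rate limit exceeded")  # throttle bulk querying
    window.append(now)
    _history[client_id] = window
    # Output limiting: compute probabilities internally but return only the
    # top-1 label, never the raw scores an attacker could distill from.
    probs = model.predict_proba([features])[0]
    return int(probs.argmax())
```

Returning only the top label starves an attacker of the soft probabilities that distillation-style extraction feeds on, while the rate limit drives up the cost of harvesting a training set at all.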
Whether you're deploying LLMs or classification models, this episode will help you understand how attackers replicate model behavior—and what you can do to stop them.
If your AI is accessible, someone’s probably trying to copy it.
#AIsecurity #ModelExtractionAttacks #Cybersecurity #BHIS #LLMsecurity #AIthreats
----------------------------------------------------------------------------------------------
Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/
Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/
Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/