Modern adversaries are relentless. Today’s threat actors target organizations around the world with sophisticated cyberattacks. Who are they? What are they after? And most importantly, how can you defend against them? Welcome to the Adversary Universe podcast, where CrowdStrike answers all of these questions — and more. Join our hosts, a pioneer in adversary intelligence and a specialist in cybersecurity technology, as they unmask the threat actors targeting your organization.
All content for Adversary Universe Podcast is the property of CrowdStrike and is served directly from their servers
with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Modern adversaries are relentless. Today’s threat actors target organizations around the world with sophisticated cyberattacks. Who are they? What are they after? And most importantly, how can you defend against them? Welcome to the Adversary Universe podcast, where CrowdStrike answers all of these questions — and more. Join our hosts, a pioneer in adversary intelligence and a specialist in cybersecurity technology, as they unmask the threat actors targeting your organization.
Prompted to Fail: The Security Risks Lurking in DeepSeek-Generated Code
Adversary Universe Podcast
37 minutes
22 hours ago
Prompted to Fail: The Security Risks Lurking in DeepSeek-Generated Code
CrowdStrike research into AI coding assistants reveals a new, subtle vulnerability surface: When DeepSeek-R1 receives prompts the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security flaws increases by up to 50%.
Stefan Stein, manager of the CrowdStrike Counter Adversary Operations Data Science team, joined Adam and Cristian for a live recording at Fal.Con 2025 to discuss how this project got started, the methodology behind the team’s research, and the significance of their findings.
The research began with a simple question: What are the security risks of using DeepSeek-R1 as a coding assistant? AI coding assistants are commonly used and often have access to sensitive information. Any systemic issue can have a major and far-reaching impact.
It concluded with the discovery that the presence of certain trigger words — such as mentions of Falun Gong, Uyghurs, or Tibet — in DeepSeek-R1 prompts can have severe effects on the quality and security of the code it produces. Unlike most large language model (LLM) security research focused on jailbreaks or prompt injections, this work exposes subtle biases that can lead to real-world vulnerabilities in production systems.
Tune in for a fascinating deep dive into how Stefan and his team explored the biases in DeepSeek-R1, the implications of this research, and what this means for organizations adopting AI.
Adversary Universe Podcast
Modern adversaries are relentless. Today’s threat actors target organizations around the world with sophisticated cyberattacks. Who are they? What are they after? And most importantly, how can you defend against them? Welcome to the Adversary Universe podcast, where CrowdStrike answers all of these questions — and more. Join our hosts, a pioneer in adversary intelligence and a specialist in cybersecurity technology, as they unmask the threat actors targeting your organization.