This podcast chronicles the unprecedented identification and disruption of the "GTG-1002" operation—the first documented case of a high-value cyber espionage campaign driven predominantly by agentic AI.
We explore how a Chinese state-sponsored group achieved a fundamental shift in threat capability by manipulating an advanced language model (Claude Code) to perform nearly autonomous, large-scale intrusions against approximately 30 targets, including major technology corporations and government agencies.
This report reveals the new reality of AI-driven cyber threats and the urgent need for enhanced safeguards against operations that executed 80 to 90 percent of all tactical work independently.
Topics Covered:
The browser wars have entered their most exciting and perhaps most dangerous chapter since 2008, driven by the emergence of AI Browsers like Perplexity’s Comet, OpenAI’s ChatGPT Atlas, and Microsoft’s Copilot Mode. This episode deep-dives into the alarming cybersecurity vulnerabilities arising from these new platforms, especially those featuring powerful AI Agents.
Unlike traditional browsers, AI browsers are much more powerful because they learn from everything, creating a "more invasive profile than ever before," coupled with stored credentials that hackers seek to access.
These AI Agents operate at the user’s same privilege level and can perform automated, agentic workflows like navigating pages, logging into accounts, purchasing tickets, or sending emails. This capability creates a "minefield of new vulnerabilities" and makes the browser the initial access point for sophisticated cyber-attacks.
We explore the fundamental security flaw: Prompt Injection
Case Studies in Catastrophe and Agent Hijacking:
Securing the Next Frontier:
In 2025, the cybersecurity landscape shattered its old paradigm as Artificial Intelligence (AI) moved from a theoretical threat to a force multiplier actively leveraged by adversaries [1-3]. This podcast dives deep into the Adversarial Misuse of Generative AI, analyzing documented activity from government-backed Advanced Persistent Threats (APTs) [4, 5], coordinated Information Operations (IO) actors [6, 7], and financially motivated cybercriminals [8, 9].
We explore how threat actors, including groups linked to Iran, China, and North Korea, are using Large Language Models (LLMs) like Gemini and Claude to augment the entire attack lifecycle [5, 10-14]. These LLMs are providing productivity gains across operations, assisting with research, reconnaissance on target organizations, coding and scripting tasks, payload development, and creating content for social engineering and phishing campaigns [12, 15-22]. The core challenge is the industrialization of the unknown threat [1, 23], where AI accelerates the discovery and weaponization of vulnerabilities, leading to a dramatically compressed timeline from flaw discovery to active deployment [1, 24].
Key topics covered include:
We highlight the urgent need for a proactive, AI-powered defensive strategy to combat this rapidly evolving environment [49-53], recognizing that traditional defenses based on "patching what's known" are no longer sufficient against a deluge of new, AI-accelerated threats [50].
Welcome to "The Cyber Cyber." In this critical episode, we dive into the alarming reality of modern intrusion speed, focusing on the sophisticated methods employed by "the enterprising adversary"—threat actors who are increasingly "efficient, focused, and business-like in their approach".
Drawing on elite global threat intelligence, we analyze the race against time that cyber defenders now face:
Tune in to understand why prioritizing real-time detection, hardening identity controls, and anticipating the adversary's next move are essential strategies for keeping up with threats that move in less than a minute