Michael Moore, CISO for the Arizona Secretary of State's Office, explains how he acts as a virtual CISO for all 15 counties, conducting physical security assessments at election facilities and providing real-time guidance during critical events. His approach treats surprise attacks as learning opportunities that should only work once, immediately sharing adversary infrastructure and TTPs across the entire election community to burn their capabilities. Michael emphasizes that misinformation, disinformation, and malinformation represent converging threat vectors that manifest as both cyber attacks and physical violence, requiring defenders to think beyond traditional security boundaries.
Ryan Murray, CISO for the State of Arizona, shares his Cybersecurity Trinity for AI framework: defend from AI-enabled attacks, defend with AI-augmented tools, and defend the AI systems organizations deploy. He explains how Arizona replicated MS-ISAC functionality through AZ ISAC, enabling 1,000+ government personnel across 200+ entities to share intelligence in real time without requiring mature security programs. Ryan stresses that organizations already generate valuable threat intelligence internally through phishing reports and security alerts, and the real challenge is communication and relationship-building rather than expensive commercial feeds.
Topics discussed:
How physical security gaps at government facilities create tactical vulnerabilities that scale across entire states.
Building sector champion models where election security and critical infrastructure specialists act as virtual CISOs for under-resourced local governments.
Why misinformation, disinformation, and malinformation represent converging cyber, physical, and reputational threat vectors that radicalize populations into kinetic attacks.
Implementing real-time threat intelligence sharing protocols that enable 1,000+ defenders to communicate via platforms like Slack during active incidents (a minimal sketch follows this list).
The evolution from receiving threat intelligence to generating intelligence internally by analyzing phishing campaigns, user reports, and infrastructure scanning patterns.
Applying the "surprise attack only works once" principle by burning adversary infrastructure and TTPs immediately through broad intelligence sharing.
Why the distinction between "intelligence" in national security contexts and cyber threat intelligence creates executive buy-in challenges.
How to prove a negative by communicating near-miss stories where intelligence prevented catastrophic breaches.
The collapsing patch window problem, where automated vulnerability discovery and exploitation eliminate traditional seven-day remediation timelines.
Implementing the Cybersecurity Trinity for AI: defending from AI-enabled attacks, defending with AI-enhanced tools, and defending AI systems from prompt injection and data leakage.
Why secure-by-design pledges fail when financially motivated vendors push defensive responsibility to the least capable organizations.
Building tabletop exercise programs that prepare election officials for denial-of-service attacks disguised as physical threats.
How generative AI enables "Script Kiddie 2.0," where non-technical adversaries automate reconnaissance, exploitation, and data exfiltration through natural language prompts.
The challenge of deepfakes and synthetic media targeting sub-national officials who lack the visibility and resources for sophisticated reputation defense.
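To ground the real-time sharing topic above, here is a minimal sketch of how a defender might push an indicator into a shared channel the moment it is spotted. It assumes a Slack incoming-webhook URL and a hypothetical share_indicator helper; it is not the specific tooling the AZ ISAC community uses.

```python
# Minimal sketch: broadcast an indicator to a shared defender channel.
# Assumes a Slack incoming webhook (the URL below is a placeholder); any
# chat platform with a JSON webhook would work the same way.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/EXAMPLE/EXAMPLE/EXAMPLE"  # placeholder

def share_indicator(indicator: str, ioc_type: str, context: str) -> None:
    """Post a threat indicator so everyone in the channel sees it in real time."""
    payload = {"text": f"New IOC ({ioc_type}): {indicator}\nContext: {context}"}
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # Slack answers with "ok" when the post succeeds

# Example: burn a phishing domain as soon as a county reports it.
share_indicator(
    "login-elections-example.com",
    "domain",
    "Credential-phishing lure reported by a county election office",
)
```

The point is less the code than the habit: one report from one county becomes an indicator every other jurisdiction can block within minutes.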
Key Takeaways:
Build sector champion programs where specialists act as virtual CISOs for under-resourced entities.
Implement real-time communication platforms like Slack that enable defenders to share threat indicators during active incidents.
Generate internal threat intelligence by systematically analyzing phishing campaigns, tracking top recipients, subject lines, and infrastructure patterns (see the sketch after these takeaways).
Apply the principle that surprise attacks should only work once by immediately burning adversary infrastructure and TTPs through broad intelligence sharing.
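As a rough illustration of generating intelligence internally, the sketch below rolls user-submitted phishing reports up into the most-targeted recipients, the most-reused subject lines, and the sender domains behind them. The report format and field names are assumptions for the example, not the guests' actual workflow.

```python
# Minimal sketch: turn user phishing reports into internal threat intelligence.
# The list of dicts stands in for reports exported from a mail gateway or
# abuse mailbox; the field names are illustrative assumptions.
from collections import Counter

reports = [
    {"recipient": "clerk@example.gov", "subject": "Ballot system update", "sender_domain": "evil.example"},
    {"recipient": "clerk@example.gov", "subject": "Ballot system update", "sender_domain": "evil.example"},
    {"recipient": "it@example.gov", "subject": "Password reset required", "sender_domain": "phish.example"},
]

def summarize_phishing(reports: list[dict]) -> dict:
    """Surface the most-targeted users, recurring lures, and sender infrastructure."""
    return {
        "top_recipients": Counter(r["recipient"] for r in reports).most_common(5),
        "top_subjects": Counter(r["subject"] for r in reports).most_common(5),
        "top_sender_domains": Counter(r["sender_domain"] for r in reports).most_common(5),
    }

print(summarize_phishing(reports))
```

Even a summary this simple tells an organization who is being targeted and with what lure, which is intelligence it can share outward rather than only consume.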