In our final episode of 2025, Dave Lewis, global advisory CISO for 1Password, joins Greg Otto to unpack the “access‑trust gap”: the growing mismatch between what employees (and tools like AI assistants) can access at work and what security teams can actually see, verify, and control. Dave explains how this gap shows up in everyday ways—logins that bypass intended controls, personal devices used for work, and teams adopting apps or AI tools faster than IT can govern them—and why that combination creates quiet but serious risk. You’ll hear practical advice on narrowing the gap with stronger identity checks, smarter device trust, cleaner SaaS governance, and simple guardrails for safe AI use that don’t crush productivity.
Veracode’s Chris Wysopal on the security issues with AI code development
Safe Mode Podcast
32 minutes 14 seconds
3 months ago
On this episode of Safe Mode, we’re joined by a renowned cybersecurity expert and CyberScoop 50 winner, Veracode co-founder and CTO Chris Wysopal, to discuss the fast-evolving landscape of AI-assisted software development. Chris shares insights from a recent study examining over 100 large language models and their tendency to introduce security vulnerabilities in generated code. The conversation delves into why a staggering 45% of AI-generated code samples contained vulnerabilities and why improvements in AI reasoning haven’t translated to more secure outputs. Chris emphasizes the critical need for enhanced security testing and better quality training data, discussing both the challenges and opportunities ahead as AI adoption accelerates. Tune in for a thoughtful exploration of the intersection between AI, secure coding, and what the future holds for developers and enterprises alike.
In our reporter chat, Greg talks with Derek Johnson about work that OpenAI and Anthropic have done with the U.S. and U.K. government to secure their models.