Before The Commit
Danny Gershman, Dustin Hilgaertner
18 episodes
1 week ago
AI is writing your code. Who's watching the AI? Before The Commit explores AI coding security, emerging threats, and the trends reshaping software development. Hosts Danny Gershman and Dustin Hilgaertner break down threat models, prompt injection, shadow AI, and practical defenses — drawing from experience across defense, fintech, and enterprise environments. Companion to the book Before The Commit: Securing AI in the Age of Autonomous Code. No hype, just tactical insight for developers, security engineers, and leaders building in the AI era.
Technology
Episode 7: LiteLLM
Before The Commit
1 hour 5 minutes 42 seconds
4 months ago

Hosts Dustin Hilgaertner and Danny Gershman discuss securing large language models (LLMs) amid rising "shadow AI" risks, where employees use unmonitored tools like ChatGPT and unintentionally spill data (e.g., sensitive info, source code). Echoing shadow IT, they stress education, policies, and multi-layered defenses over outright bans, since prohibition drives use underground; studies suggest roughly 40% of workers admit to AI usage despite restrictions.

LiteLLM: Open-Source LLM Proxy

Central focus: LiteLLM as a tool to combat shadow AI. It's a proxy that funnels all LLM calls through a controlled channel, blocking public providers and steering traffic to approved ones (e.g., AWS Bedrock in GovCloud). Key features (a minimal config sketch follows the list):

- Visibility & Tracking: Logs usage, errors, and spending per employee/team; helps identify both power users and teams that need training.

- Security: Guardrails (WAF-like) scan for and block sensitive data (e.g., API keys, source code) before transmission; RBAC via virtual keys backed by secret stores (e.g., AWS/Azure), so master provider keys are never shared.

- Management: Rate limiting, budgets, and load balancing across providers/models; fallbacks when limits are hit; RAG integration for team-specific data/models (e.g., support vs. data science).

- Integration: Pipes logs to observability tools; the core is open source, with an enterprise version adding SSO.
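
To make the proxy model concrete, here is a minimal sketch of a LiteLLM proxy config along the lines the hosts describe. The model alias, Bedrock model ID, and region are illustrative assumptions, not values from the episode; check exact keys against the LiteLLM docs.

```yaml
# config.yaml -- minimal LiteLLM proxy sketch (illustrative values)
model_list:
  - model_name: company-claude                 # alias developers call; hides the real provider
    litellm_params:
      model: bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0   # route to an approved provider
      aws_region_name: us-gov-west-1           # e.g., Bedrock in a GovCloud region

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY    # admin key kept in the environment, never shared with teams
```

Started with `litellm --config config.yaml`, the proxy can then issue per-team virtual keys (with their own budgets and rate limits) through its key-management endpoints instead of handing out provider credentials.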

Not a silver bullet, but it enables safe, company-provided AI that boosts productivity without leaks. The hosts encourage "bring your own model" policies with oversight, avoiding moral hazards like unvetted tools exposing IP or HIPAA data. In government/defense settings, it helps support FedRAMP compliance.
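
On the developer side, the proxy exposes an OpenAI-compatible endpoint, so a hedged sketch of client usage is simply pointing the standard OpenAI SDK at the internal proxy with a per-team virtual key; the URL, key, and model alias below are hypothetical.

```python
from openai import OpenAI

# Point the standard OpenAI client at the internal LiteLLM proxy instead of a public provider.
client = OpenAI(
    base_url="https://llm-proxy.internal.example.com",  # hypothetical internal proxy URL
    api_key="sk-team-support-virtual-key",              # per-team virtual key issued by the proxy admin
)

# The alias is resolved by the proxy (see the config sketch above), which also logs the call,
# applies guardrails, and enforces the team's budget and rate limits.
resp = client.chat.completions.create(
    model="company-claude",
    messages=[{"role": "user", "content": "Summarize the risks in this design doc."}],
)
print(resp.choices[0].message.content)
```

Because the interface is unchanged, developers keep the productivity of company-provided AI while security retains visibility and control at a single choke point.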

IDE Exploration: Warp

Brief dive into Warp, a terminal-first AI CLI (vs. code-first tools like VS Code/Cursor). It competes with Claude Code and runs as a standalone app driven by natural language prompts (e.g., "change directory to X") for bash tasks (Git history, logs, Kubernetes), with side panels for coding (rules, autocomplete). Its scope spans the entire hard drive, which is powerful for workflows but raises privacy questions about what data gets sent. The hosts note it's like an "AI makefile" for your computer, though the terminal focus feels secondary for pure coding. Ties into the NVIDIA CEO's quip: "English is the new coding language."

AI in Gov Contracting

AI lowers barriers for proposals (e.g., auto-generating 10-page whitepapers), homogenizing responses and flooding SAM.gov. That makes differentiation hard and argues for more human evaluation (demos, prototypes via OTAs) over paper reviews. Government should adopt private-sector agility (trials, betas) while maintaining security, taking on less bespoke-build risk and leaning more on platforms.

Coding's Future & Security

Debate: Will coding collapse to English on one end and binary on the other, with little human-readable source in between? Source code aids compliance and trust today (static analysis for vulnerabilities), but dynamic testing (fuzzing, WAFs) could mature enough to make it obsolete. AI as a "Play-Doh machine at light speed" needs guardrails to avoid chaos; in the interim, human oversight remains the backstop.

Newz or Noize

- Anthropic Lawsuit: $1.5B class action for training on ~500K pirated copyrighted books from shadow libraries. Publishers seek payouts; signals wave of suits (OpenAI, Grok next?). Reddit sued Anthropic separately in June over data scraping.

- Copyright in AI Era: Fair use debate—reading/learning OK, but mass ingestion for commercial models? Humans can't replicate styles en masse; AI can (e.g., "new Game of Thrones"). Needs evolved laws: license data, monetize via new models (like Napster → streaming). Frequency/scalability challenges enforcement; transformative use key.

- AI in Film: Reconstructing lost 40-min Orson Welles footage (1940s) using old photos/radio + AI.
