Home
Categories
EXPLORE
True Crime
Comedy
Business
Society & Culture
Sports
TV & Film
History
About Us
Contact Us
Copyright
© 2024 PodJoint
00:00 / 00:00
Sign in

or

Don't have an account?
Sign up
Forgot password
https://is1-ssl.mzstatic.com/image/thumb/Podcasts211/v4/2d/eb/68/2deb68c4-2f22-311e-86b5-86bdde7557d1/mza_13092477901838151059.jpg/600x600bb.jpg
AI Security Ops
Black Hills Information Security
33 episodes
3 hours ago
Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation). Brought to you by the experts at Black Hills Information Security https://blackhillsinfosec.com -------------------------------------------------- About Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/ About Derek Banks - https://blackhillsinfosec.com/team/derek-banks/ About Brian Fehrman - https://blackhillsinfosec.com/team/brian-fehrman/ About Bronwen Aker - https://blackhillsinfosec.com/team/bronwen-aker/ About Ben Bowman - https://blackhillsinfosec.com/team/ben-bowman/
Show more...
Education
News,
Tech News
RSS
All content for AI Security Ops is the property of Black Hills Information Security and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation). Brought to you by the experts at Black Hills Information Security https://blackhillsinfosec.com -------------------------------------------------- About Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/ About Derek Banks - https://blackhillsinfosec.com/team/derek-banks/ About Brian Fehrman - https://blackhillsinfosec.com/team/brian-fehrman/ About Bronwen Aker - https://blackhillsinfosec.com/team/bronwen-aker/ About Ben Bowman - https://blackhillsinfosec.com/team/ben-bowman/
Show more...
Education
News,
Tech News
Episodes (20/33)
AI Security Ops
AI News Stories | Episode 33

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


AI News | Episode 33
In this episode of BHIS Presents: AI Security Ops, the panel dives into the latest developments shaping the AI security landscape. From the first documented AI-orchestrated cyber-espionage campaign to polymorphic malware powered by Gemini, we explore how agentic AI, insecure infrastructure, and old-school mistakes are creating a fragile new attack surface.

We break down:

  • AI-driven cyber espionage: Anthropic disrupts a state-sponsored campaign using autonomous 
  • Black-hat LLMs: KawaiiGPT democratizes offensive capabilities for script kiddies.
  • Critical RCEs in AI stacks: ShadowMQ vulnerabilities hit Meta, NVIDIA, Microsoft, and more.
  • Amazon’s private AI bug bounty: Nova models under the microscope.
  • Google Antigravity IDE popped in 24 hours: Persistent code execution flaw.
  • PROMPTFLUX malware: Polymorphic VBScript leveraging Gemini for hourly rewrites.


Whether you’re defending enterprise AI deployments or building secure agentic tools, this episode will help you understand the emerging risks and what you can do to stay ahead.

⏱️ Chapters

  • (00:00) - Intro & Sponsor Shoutouts
  • (01:27) - AI-Orchestrated Cyber Espionage (Anthropic)
  • (09:54) - KawaiiGPT: Free Black-Hat LLM
  • (18:10) - ShadowMQ: Critical RCE in AI Inference Engines
  • (22:45) - Amazon Nova: Private AI Bug Bounty
  • (26:38) - Google Antigravity IDE Hacked in 24 Hours
  • (31:36) - PROMPTFLUX: Malware Using Gemini for Polymorphism


#AISecurity #Cybersecurity #BHIS #LLMSecurity #AIThreats #AgenticAI #BugBounty #malware

Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com


Antisyphon Training

https://www.antisyphontraining.com/


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

Show more...
1 day ago
37 minutes

AI Security Ops
Model Evasion Attacks | Episode 32

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com

Model Evasion Attacks | Episode 32
In this episode of BHIS Presents: AI Security Ops, the panel explores the stealthy world of model evasion attacks, where adversaries manipulate inputs to trick AI classifiers into misclassifying malicious activity as benign. From image classifiers to malware detection and even LLM-based systems, learn how attackers exploit decision boundaries and why this matters for cybersecurity.

We break down:
- What model evasion attacks are and how they differ from data poisoning
- How attackers tweak features to bypass classifiers (images, phishing, malware)
- Real-world tactics like model extraction and trial-and-error evasion
- Why non-determinism in AI models makes evasion harder to predict
- Advanced threats: model theft, ablation, and adversarial AI
- Defensive strategies: adversarial training, API throttling, and realistic expectations
- Future outlook: regulatory trends, transparency, and the ongoing arms race

Whether you’re deploying EDR solutions or fine-tuning AI models, this episode will help you understand why evasion is an enduring challenge, and what you can do to defend against it.


#AISecurity #ModelEvasion #Cybersecurity #BHIS #LLMSecurity #aithreats


Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro & Sponsor Shoutouts
  • (01:19) - What Are Model Evasion Attacks?
  • (03:58) - Image Classifiers & Pixel Tweaks
  • (07:01) - Malware Classification & Decision Boundaries
  • (10:02) - Model Theft & Extraction Attacks
  • (13:16) - Non-Determinism & Myth Busting
  • (16:07) - AI in Offensive Capabilities
  • (17:36) - Defensive Strategies & Adversarial Training
  • (20:54) - Vendor Questions & Transparency
  • (23:22) - Future Outlook & Regulatory Trends
  • (25:54) - Panel Takeaways & Closing Thoughts
Show more...
1 week ago
28 minutes

AI Security Ops
Data Poisoning | Episode 31

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


Data Poisoning Attacks | Episode 31
In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems.

We break down:

  • What data poisoning is and why it matters
  • How attackers inject malicious samples or flip labels in training sets
  • The role of open-source repositories like Hugging Face in supply chain risk
  • New twists for LLMs: poisoning via reinforcement feedback and RAG
  • Real-world concerns like bias in ChatGPT and malicious model uploads
  • Defensive strategies: governance, provenance, versioning, and security assessments


Whether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in, and what you can do to prevent it. Treat your AI like a “drunk intern”: verify everything.


#aisecurity  #DataPoisoning #Cybersecurity #BHIS #llmsecurity  #aithreats


Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro & Sponsor Shoutouts
  • (01:19) - What Is Data Poisoning?
  • (03:58) - Poisoning Classifier Models
  • (08:10) - Risks in Open-Source Data Sets
  • (12:30) - LLM-Specific Poisoning Vectors
  • (17:04) - RAG and Context Injection
  • (21:25) - Realistic Threats & Examples
  • (25:48) - Defensive Strategies & Governance
  • (28:27) - Panel Takeaways & Closing Thoughts
Show more...
2 weeks ago
31 minutes

AI Security Ops
AI News Stories | Episode 30

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


AI News Stories | Episode 30
In this episode of BHIS Presents: AI Security Ops, we break down the top AI cybersecurity news and trends from November 2025. Our panel covers rising public awareness of AI, the security risks of local LLMs, emerging AI-driven threats, and what these developments mean for security teams. Whether you work in cybersecurity, AI security, or incident response, this episode helps you stay ahead of evolving AI-powered attacks and defenses.

Topics Covered:

Only 5% of Americans are unaware of AI?
What Pew Research reveals about AI’s penetration into everyday life and workplace usage.
AI’s Shift to the Intimacy Economy – Project Liberty
https://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1 

Amazon to Cut Jobs and Invest in AI Infrastructure
14,000 corporate roles eliminated—are layoffs really about efficiency or something else?
Amazon to Cut Jobs & Invest in AI – DW
https://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365

Local Models Less Secure than Cloud Providers?
Why quantization and lack of guardrails make local LLMs more vulnerable to prompt injection and insecure code.
Local LLMs Security Paradox – Quesma
https://quesma.com/blog/local-llms-security-paradox

Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.

Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro & Sponsor Shoutouts
  • (01:07) - AI’s Shift to the Intimacy Economy (Pew Research)
  • (19:40) - Amazon Layoffs & AI Investment
  • (27:00) - Local LLM Security Paradox
  • (36:32) - Wrap-Up & Key Takeaways
Show more...
3 weeks ago
37 minutes

AI Security Ops
A Conversation with Dr. Colin Shea-Blymyer | Episode 29

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com

A Conversation with Dr. Colin Shea-Blymyer  | Episode 29

In this episode of BHIS Presents: AI Security Ops, the panel welcomes Dr. Colin Shea-Blymyer for a deep dive into the intersection of AI governance, cybersecurity, and red teaming. From the historical roots of neural networks to today’s regulatory patchwork, we explore how policy, security, and innovation collide in the age of AI. Expect candid insights on emerging risks, open models, and why defining your risk appetite matters more than ever.

Topics Covered:

  • AI governance vs. innovation: U.S. vs. EU regulatory approaches
  • The evolution of neural networks and lessons from AI history
  • AI red teaming: definitions, methodologies, and data-sharing challenges
  • Safety vs. security: where they overlap and diverge
  • Emerging risks: supply chain vulnerabilities, prompt injection, and poisoned data
  • Open weights vs. closed models: implications for research and security
  • Practical takeaways for organizations navigating AI uncertainty


About the Panel:
Joff Thyer, Dr. Brian Fehrman, Derek Banks
Guest Panelist: Dr. Colin Shea-Blymyer
https://cset.georgetown.edu/staff/colin-shea-blymyer/

#aisecurity  #aigovernance  #cyberrisk  #AIredteam #OpenModels #aipolicy  #BHIS #AIthreats #aiincybersecurity  #llmsecurity


Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro & Guest Welcome
  • (02:14) - Colin’s Journey: From CS to AI Governance
  • (06:33) - Lessons from AI History & Neural Network Origins
  • (10:28) - AI Red Teaming: Definitions & Methodologies
  • (15:11) - Safety vs. Security: Where They Intersect
  • (22:47) - Regulatory Landscape: U.S. Patchwork vs. EU AI Act
  • (33:42) - Open Models Debate: Risks & Research Benefits
  • (38:19) - Emerging Threats & Supply Chain Risks
  • (44:06) - Practical Takeaways & Closing Thoughts
Show more...
4 weeks ago
46 minutes

AI Security Ops
Questions from the Community | Episode 28

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


AI News Stories | Episode 28 – Questions from the Community
In this episode of BHIS Presents: AI Security Ops, the panel tackles real questions from the community, diving deep into the practical, ethical, and technical challenges of AI in cybersecurity. From red teaming tools to prompt privacy, this Q&A session delivers candid insights and actionable advice for professionals navigating the AI-infused threat landscape.

🧠 Topics Covered:

  • Open-source tools for LLM red teaming
  • Threat modeling AI systems (STRIDE methodology)
  • Hallucination rates in frontier vs. local models
  • Prompt privacy: what’s stored, what’s shared
  • Should red teamers disclose AI usage?
  • Human-in-the-loop: AI-generated deliverables
  • Whether you're a pentester, SOC analyst, or just curious about how AI is reshaping offensive security, this episode is packed with expert perspectives and practical takeaways.


About the Panel:
Brian Fehrman, Derek Banks, Joff Thyer


Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro & Sponsor Shoutouts
  • (01:14) - Recommended Tools for LLM Red Teaming
  • (06:12) - Threat Modeling AI Systems
  • (09:58) - Which Models Hallucinate Most?
  • (17:13) - Prompt Privacy: What You Should Know
  • (22:54) - Should Red Teamers Disclose AI Usage?
  • (27:01) - Final Thoughts & Wrap-Up
Show more...
1 month ago
28 minutes

AI Security Ops
Azure AI Foundry Guardrails | Episode 27

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


Azure AI Foundry Guardrails | Episode 27

In this episode of BHIS Presents: AI Security Ops, we explore how to configure content filters for AI models using the Azure AI Fooundry guardrails and controls interface. Whether you're building secure demos or deploying models in production, this walkthrough shows how to block unwanted content, enforce policy, and maintain compliance.

Topics Covered:

  •  Changing default filters for demo compliance
  •  Setting up a system prompt and understanding its role
  •  Adding regex terms to block specific content
  •  Creating and configuring a custom filter: “tech demo guardrails”
  •  Input-side filtering: inspecting user text before model access
  •  Safety vs. security categories in filtering
  •  Enabling prompt shields for indirect jailbreak detection


This video is ideal for developers, security engineers, and anyone working with AI systems who needs to implement layered defenses and ensure responsible model behavior.


Why This Matters
By implementing layered security—block lists, input and output filters—you protect sensitive data, comply with policy, and maintain a safe user experience.

#AIsecurity #GuardrailsAndControls #ContentFiltering #PromptSecurity #RegexFiltering #BHIS #AIModelSafety #SystemPromptSecurity

Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Introduction & Overview
  • (01:17) - Changing the Default Content Filter for Demo Compliance
  • (02:00) - Setting Up a System Prompt and Its Purpose
  • (04:26) - Adding a New Term (“dogs”) to the Content Filter (Regex Example)
  • (05:04) - Creating and Configuring a Content Filter Named “Tech Demo Guardrails”
  • (05:35) - How Input-Side Filters Inspect and Block Unwanted Content
  • (06:01) - Overview of Safety Categories vs. Security Categories
  • (07:15) - Enabling Prompt Shields for Indirect Jailbreak Detection (Not Used in Demo)
  • (08:30) - Summary & Next Steps
Show more...
1 month ago
15 minutes

AI Security Ops
Questions from the Community | Episode 26

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


Questions from the Community | Episode 26
In this community-driven episode of BHIS Presents: AI Security Ops, the panel answers real questions from viewers about AI security, privacy, and risk. Featuring Brian Fehrman, Bronwen Aker, Jack Verrier, and Joff Thyer, the team dives into everything from guardrails and hallucinations to GDPR, agentic AI, and how to stay safe in an AI-saturated world.

💬 Topics include:

  • Are guardrails enough to protect sensitive prompts?
  • What’s the difference between hallucination and confabulation?
  • How does AI intersect with GDPR and the right to be forgotten?
  • What does it mean to “stay safe” when using AI?
  • How is securing AI different from traditional software?


Whether you're a red teamer, SOC analyst, or just trying to navigate the AI landscape, this episode offers practical insights and thoughtful perspectives from seasoned security professionals.

Panelists:
🔹 Brian Fehrman
🔹 Bronwen Aker
🔹 Jack Verrier
🔹 Joff Thyer
#AIsecurity #Cybersecurity #PromptInjection #LLMs #BHIS #AIprivacy #AgenticAI #AIandGDPR

Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro & Panel Welcome
  • (01:22) - Are Guardrails Enough to Protect System Prompts?
  • (09:54) - Explaining Hallucination vs. Confabulation
  • (20:09) - AI and GDPR: The Right to Be Forgotten?
  • (23:49) - How Do We Stay Safe Using AI?
  • (32:26) - Securing AI vs. Traditional Software
  • (37:18) - Final Thoughts & Wrap-Up
Show more...
1 month ago
37 minutes

AI Security Ops
AI News Stories | Episode 25

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


AI News Stories | Episode 25
In this episode of BHIS Presents: AI Security Ops, the panel dives into the biggest AI cybersecurity headlines from late September 2025. From government regulation to zero-click exploits, we unpack the risks, trends, and implications for security professionals navigating the AI-powered future.

🧠 Topics Covered:

  • Government oversight of advanced AI systems
  • Accenture’s massive layoffs amid AI pivot
  • ShadowLeak: zero-click vulnerability in ChatGPT agents
  • Malicious MCP server stealing emails
  • AI in the SOC: benefits and risks
  • Attackers using AI to scale ransomware and social engineering


Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.


Brought to you by Black Hills Information Security 

https://www.blackhillsinfosec.com

----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro & Sponsor Shoutouts
  • (00:45) - Senators Introduce AI Risk Evaluation Act
  • (09:48) - Accenture Layoffs & AI Restructuring
  • (16:17) - ShadowLeak: Zero-Click Vulnerability in ChatGPT
  • (20:07) - Malicious MCP Server & Supply Chain Risks
  • (26:27) - AI in the SOC: Alert Triage & Analyst Burnout
  • (30:10) - Final Thoughts: AI’s Role in Security Operations
Show more...
1 month ago
31 minutes

AI Security Ops
Model Extraction Attacks | Episode 24

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


Model Extraction Attacks | Episode 24
In this solo episode of BHIS Presents: AI Security Ops, Brian Fehrman explores the stealthy world of Model Extraction Attacks—where hackers clone your AI model without ever touching your code. Learn how adversaries can reverse-engineer your multimillion-dollar model simply by querying its API, and why this threat is more than just academic.

We break down:
- What model extraction is and how it works
- Real-world examples like DeepSeek’s alleged distillation of OpenAI models
- The risks to intellectual property, security, and sensitive data
- Defensive strategies including API throttling, output limiting, watermarking, and honeypots
- Legal and ethical questions around benchmarking vs. theft

Whether you're deploying LLMs or classification models, this episode will help you understand how attackers replicate model behavior—and what you can do to stop them.
If your AI is accessible, someone’s probably trying to copy it.


#AIsecurity #ModelExtractionAttacks #Cybersecurity #BHIS #LLMsecurity #AIthreats


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro & Sponsor Shoutouts
  • (01:19) - What Is a Model Extraction Attack?
  • (02:45) - Why Training a Model Is So Expensive
  • (05:42) - How Model Extraction Works
  • (07:11) - Why It Matters: IP, Security & Data Risks
  • (10:25) - What Makes Extraction Easier or Harder
  • (12:54) - Defenses: Monitoring, Watermarking & Privacy
  • (16:04) - What to Do If You Suspect an Attack
  • (16:29) - Legal & Ethical Questions Around Model Theft
  • (19:30) - Final Thoughts & Takeaways
Show more...
2 months ago
19 minutes

AI Security Ops
News of the Month | Episode 23

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com



In this episode of AI Security Ops, Brian Fehrman and Joff Thyer dive into the latest AI news of the month, exploring how rapidly evolving technologies are reshaping cybersecurity.
Topics covered include:
 - How AI is changing cybersecurity monitoring
 - Expanding from email to Slack, Teams, and other chat platforms
 - Addressing insider threats and phishing campaigns in new channels
 - The rapid pace of AI innovation and industry trends
 - Why organizations should prioritize AI security assessments
 - Real-world risks and opportunities in the AI landscape

Stay ahead in the AI race with Black Hills Information Security as we cover real-world risks, opportunities, and the latest developments in the AI landscape.


///News Stories This Episode:

1. AI-Powered Villager Pen Testing Tool Hits 11,000 PyPI Downloads Amid Abuse Concerns
https://thehackernews.com/2025/09/ai-powered-villager-pen-testing-tool.html

2. CrowdStrike and Meta Just Made Evaluating AI Security Tools Easier
https://www.zdnet.com/article/crowdstrike-and-meta-just-made-evaluating-ai-security-tools-easier/

3. Check Point Acquires Lakera to Deliver End-to-End AI Security for Enterprises
https://www.checkpoint.com/press-releases/check-point-acquires-lakera-to-deliver-end-to-end-ai-security-for-enterprises/

4. Proofpoint Offers AI Agents to Monitor Human-Based Communications
https://www.msspalert.com/news/proofpoint-offers-ai-agents-to-monitor-human-based-communications

5. EvilAI Malware Campaign Exploits AI-Generated Code to Breach Global Critical Sectors
https://industrialcyber.co/ransomware/evilai-malware-campaign-exploits-ai-generated-code-to-breach-global-critical-sectors/

----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

Show more...
2 months ago
34 minutes

AI Security Ops
Insider Threat 2.0 - Prompt Leaks & Shadow AI | Episode 22

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


Insider Threat 2.0 -  Prompt Leaks & Shadow AI | Episode 22

In this episode of BHIS Presents AI Security Ops, we dive into Insider Threat 2.0: Prompt Leaks & Shadow AI. The panel explores the hidden risks of employees pasting sensitive data into public AI tools, the rise of unauthorized “Shadow AI” in organizations, and how policies—or lack thereof—can expose critical information. Learn why free AI services often make you the product, how prompt history creates data leakage risks, and why companies must establish clear AI usage guidelines. We also cover practical defenses, from enterprise AI accounts to cultural awareness training, and draw parallels to past IT challenges like Shadow IT and rogue wireless.
If you’re concerned about AI security, data leakage, or safe adoption of large language models, this discussion will help you navigate the risks and protect your organization.

#AIsecurity #PromptInjection #ShadowAI #Cybersecurity #BHIS


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

Show more...
2 months ago
25 minutes

AI Security Ops
Deepfakes and Fraudulent Interviews In Remote Hiring | Episode 21

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


Episode 21 - Deepfakes And Fraudulent Interviews In Remote Hiring


In this episode of AI Security Ops by Black Hills Information Security, the crew explores the alarming rise of deepfakes and fraudulent interviews in remote hiring. As virtual work expands, cybercriminals are using AI-driven impersonation tactics to pose as job candidates, deceive recruiters, and gain unauthorized access to organizations. Joff, Bronwen Aker, Brian Fehrman, and Derek Banks break down real-world cases, explain the challenges of spotting deepfake job scams, and share actionable strategies to secure hiring processes. Discover the red flags to watch for in virtual interviews, how attackers exploit trust, and why companies must adapt their security awareness in the age of AI.


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

Show more...
2 months ago
28 minutes

AI Security Ops
The Hallucination Problem | Episode 20

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


Episode 20 - The Hallucination Problem


In this episode of AI Security Ops, Joff Thyer and Brian Fehrman from Black Hills Information Security dive into the hallucination problem in AI large language models and generative AI. 


They explain what hallucinations are, why they happen, and the risks they create in real-world AI deployments. The discussion covers security implications, practical examples, and strategies organizations can use to mitigate these issues through stronger design, monitoring, and testing. 


A must-watch for cybersecurity professionals, AI researchers, and anyone curious about the limitations and challenges of modern AI systems.


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

Show more...
3 months ago
26 minutes

AI Security Ops
News of the Month | Episode 19

Register for FREE Infosec Webcasts, Anti-casts & Summits –
https://poweredbybhis.com

AI News of the Month | Episode 19

In Episode 19,Brianand Derek cover a zero-click indirect prompt injection attack against ChatGPT connectors and seemingly innocent Google Calendar events that hijack smart homes via Gemini, with possible consequences for the power grid.

They'll discuss the impact of Microsoft patching a critical Azure OpenAI SSRF vulnerability and go over new NIST AI security standards, IBM’s study on shadow AI and breach costs, OpenAI’s response to chat indexing leaks, and a malicious VS Code extension that stole $500K in cryptocurrency. 

#AI #CyberSecurity #PromptInjection #Malware #InfoSec #AIThreats #Hacking #GenerativeAI #Deepfakes #LLM #ShadowAI

  • “Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer) — Aug 6, 2025
    • Primary: https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/
    • Tech write-up: https://labs.zenity.io/p/agentflayer-chatgpt-connectors-0click-attack-5b41


  • Poisoned Google Calendar invite hijacks Gemini to control a smart home — Aug 6–10, 2025
    • Primary: https://www.wired.com/story/google-gemini-calendar-invite-hijack-smart-home/
    • Bug/patch coverage: https://www.bleepingcomputer.com/news/security/google-calendar-invites-let-researchers-hijack-gemini-to-leak-user-data/


  • Microsoft August Patch Tuesday adds AI-surface fixes; critical Azure OpenAI vuln (CVE-2025-53767) — Aug 12–13, 2025
    • Release coverage: https://www.techradar.com/pro/security/microsofts-latest-major-patch-fixes-a-serious-zero-day-flaw-and-a-host-of-other-issues-so-update-now
    • CVE entry: https://nvd.nist.gov/vuln/detail/CVE-2025-53767 (NVD)
    • Overview: https://www.tenable.com/blog/microsofts-august-2025-patch-tuesday-addresses-107-cves-cve-2025-53779 (Tenable®)


  • NIST proposes SP 800-53 “Control Overlays for Securing AI Systems” — Aug 14, 2025
    • Announcement: https://www.nist.gov/news-events/news/2025/08/nist-releases-control-overlays-securing-ai-systems-concept-paper
    • Concept paper (PDF): https://csrc.nist.gov/csrc/media/Projects/cosais/documents/NIST-Overlays-SecuringAI-concept-paper.pdf


  • IBM 2025 “Cost of a Data Breach”: AI is both breach vector and defender — Jul 30, 2025
    • Press release: https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications%2C-97-of-which-reported-lacking-proper-ai-access-controls
    • Report: https://www.ibm.com/reports/data-breach
    • Analysis: https://venturebeat.com/security/ibm-shadow-ai-breaches-cost-670k-more-97-of-firms-lack-controls/ (VentureBeat)


  • OpenAI considers encrypting Temporary Chats; privacy clean-ups after search-indexing scare — Aug 18, 2025
    • Interview: https://www.axios.com/2025/08/18/altman-openai-chatgpt-encrypted-chats
    • Context: https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/
    • Help center (retention): https://help.openai.com/en/articles/8914046-temporary-chat-faq


  • Fake VS Code extension for Cursor leads to $500K crypto theft — July 11, 2025
    • Primary: https://www.scworld.com/news/fake-visual-studio-code-extension-for-cursor-led-to-500k-theft SC Media
    • Research write-up: https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/Securelist
    • Coverage: https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/


----------------------------------------------------------------------------------------------
Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/
Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/
Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro
  • (00:31) - “Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer)
  • (01:15) - A zero-click prompt injection
  • (02:12) - url_safe bypassed using URLs from Microsoft’s Azure Blob cloud storage
  • (07:08) - Poisoned Google Calendar invite hijacks Gemini to control a smart home
  • (08:35) - The intersection of AI and IOT
  • (09:53) - Be careful what you hook AI up to
  • (10:23) - Derek warns of threat to power grid
  • (11:54) - Mitigations - restrict permissions, sanitize calendar content
  • (13:56) - Patch Tuesday - AI-surface fixes; critical Azure OpenAI vuln
  • (15:49) - NIST proposes SP 800-53 “Control Overlays for Securing AI Systems”
  • (18:43) - IBM “Cost of a Data Breach”: AI is both breach vector and defender
  • (19:16) - Shadow AI
  • (21:49) - “The AI adoption curve is outpacing controls”
  • (23:02) - OpenAI considers encrypting Temporary Chats
  • (26:39) - Data storage and logging LLM interactions
  • (29:59) - Fake VS Code extension for Cursor leads to $500K crypto theft
  • (30:37) - Danger of using pip install as root on a server
Show more...
3 months ago
37 minutes

AI Security Ops
Malware in the Age of AI | EP 18

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


Malware in the Age of AI | Episode 18

In Episode 18, hosts Joff Thyer, Derek Banks and Brian Fehrman discuss the rise of AI-powered malware. From polymorphic keyloggers like Black Mamba to the use of ChatGPT, WormGPT, and fine-tuned LLMs for cyberattacks, the team will explain how generative AI is reshaping the security landscape.

They'll break down the real risks vs. hype, including prompt injection, jailbreaking, deepfakes, and AI-driven fraud, while also sharing strategies defenders can use to fight back.

The discussion highlights both the ethical implications and the critical need for defense-in-depth as threat actors use AI to accelerate their attacks.


#AI #Cybersecurity #Malware #AIThreats #Deepfakes #LLM #InfoSec #AIinSecurity #GenerativeAI #Hacking


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro
  • (01:15) - Black Mamba polymorphic AI keylogger
  • (02:47) - Can Chat GPT5 generate malware for us?
  • (03:42) - Guardrail circumvention technique #1
  • (04:16) - Guardrail circumvention technique #2
  • (05:30) - Guardrail circumvention technique #3
  • (05:59) - Guardrail circumvention technique #4
  • (06:30) - Using an Abliterated Model
  • (08:32) - AI models have democratized software creation
  • (11:20) - Polymorphic keyloggers are not new
  • (12:03) - AI makes it faster to iterate polymorphic malware
  • (12:33) - AI is able to analyze source code and find more vulnerabilities
  • (15:16) - How scared should we be? (hype vs reality)
  • (16:10) - Knowing enough to ask the right questions is important
  • (17:41) - Significant risks of AI fraud and social engineering
  • (19:32) - Business email compromise
  • (21:10) - How defenders can use AI
  • (24:28) - Audio deepfakes have become easier to create
  • (25:06) - Ethical concerns for pentesters using AI
  • (29:26) - In one sentence, how will AI change malware production in the near future?
Show more...
3 months ago
32 minutes

AI Security Ops
Community Q&A | Episode 17

Register for FREE Infosec Webcasts, Anti-casts & Summits –
https://poweredbybhis.com

Community Q&A | Episode 17

In episode 17 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, Brian Fehrman and Bronwen Aker answer viewer-submitted questions about system prompts, prompt injection risks, AI hallucinations, deep fakes, and when (and when not) to use AI in cybersecurity. 

They'll discuss the difference between system and user prompts, how temperature settings impact LLM outputs, and the biggest mistakes companies make when deploying AI models. 

They'll also explain how to reduce hallucinations, and approach AI responsibly in security workflows. Derek explains his method for detecting audio deep fakes.

----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro
  • (01:10) - What is a system prompt? How is it different from a user prompt?
  • (03:35) - What are some common system prompt mistakes?
  • (06:54) - Does repeating a prompt give different responses? (non-deterministic)
  • (07:56) - The temperature knob effect
  • (12:18) - When should I use AI? When should I not?
  • (16:47) - What are best practices to reduce hallucinations?
  • (20:29) - End-user temperature knob work-around
  • (22:55) - AI bots that rewrite their code to avoid shutdown commands
  • (26:53) - NCSL.org - Updates on legislation affecting AI
  • (29:44) - How do we detect AI deep fakes?
  • (30:00) - Derek’s DeepFake demo video
  • (30:38) - DISCLAIMER - Do Not use AI deep fakes to break the law!
  • (31:29) - F5-tts.org - Deep fake website
  • (35:02) - Derek pranks his family using AI
Show more...
3 months ago
37 minutes

AI Security Ops
A Conversation with Daniel Miessler | Episode 16

A Conversation with Daniel Miessler

In Episode 16, Joff and the team welcome human-centric AI innovator Daniel Miessler, creator of Fabric, an AI framework for solving real-world problems from a human perspective.

The conversation covers AI’s role in cybersecurity, the importance of clarity in “intent engineering” over prompt tricks, and the risks and opportunities of deploying large language models. They explore the shift from “vibe coding” to “spec coding,” the rise of AI scaffolding over raw model improvements, and what AI advancements including GPT-5 mean for the future of knowledge work.


"Introducing Fabric — A Human AI Augmentation Framework"
https://www.youtube.com/watch?v=wPEyyigh10g

Daniel's GitHub repository:
https://github.com/danielmiessler/Fabric


#AI #CyberSecurity #AgenticAI #SecurityOps #PromptEngineering

Show more...
4 months ago
44 minutes

AI Security Ops
News of the Month – Episode 15

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com


In this episode, we'll discuss Palo Alto Networks’ acquisition of Protect AI, the rise of “Shadow AI” in enterprises, alarming AI-driven data leaks, and vibe coding gone wrong. We'll dive into critical issues like AI hallucinations and the growing need for "human in the loop" oversight. We'll wrap up with a discussion of Proton’s Lumo AI chatbot, disappearing medical disclaimers in AI chatbots and data poisoning in Amazon's AI coding agent.


#AI #Cybersecurity #LLM #AInews #AISecurityOps #BlackHillsInfosec #LLMGuard #ShadowAI #DataLeak #AgenticAI #PrivacyTech #VibeCoding #ProtectAI




00:00 - Welcome, Intro

00:58 - Palo Alto Networks Completes Acquisition of Protect AI

https://www.paloaltonetworks.com/company/press/2025/palo-alto-networks-completes-acquisition-of-protect-ai

04:53 - Metomic Finds AI Data Leaks Impact 68% of Organizations, But Only 23% Have Proper AI Data Security Policies 

https://www.metomic.io/resource-centre/metomic-finds-ai-data-leaks-impact-68-of-organizations-but-only-23-have-proper-ai-data-security-policies

09:46 - S&P 500’s AI adoption may invite data breaches, new research shows

https://cybernews.com/security/sp-500-companies-ai-security-risks-report/

12:53 - Vibe Coding Fiasco: AI Agent Goes Rogue, Deletes Company's Entire Database

https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database

18:47 - A major AI training data set contains millions of examples of personal data

https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/

23:34 - Introducing Lumo, the AI where every conversation is confidential

https://proton.me/blog/lumo-ai

28:56 - AI companies have stopped warning you that their chatbots aren’t doctors

https://www.technologyreview.com/2025/07/21/1120522/ai-companies-have-stopped-warning-you-that-their-chatbots-arent-doctors/

36:53 - Hacker Plants Computer 'Wiping' Commands in Amazon's AI Coding Agent

https://www.404media.co/hacker-plants-computer-wiping-commands-in-amazons-ai-coding-agent/

Show more...
4 months ago
39 minutes

AI Security Ops
Questions From The Community podcast – Episode 14

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – 

https://poweredbybhis.com

In Episode 14 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, and Brian Fehrman answer questions submitted by viewers. 

The team will cover how effective prompt engineering can transform LLMs into workflow accelerators, and debate AI tool strengths— when to use Claude, ChatGPT, or Notebook LM.

They'll discuss the importance of human oversight when integrating AI into operations, highlighting the "human-in-the-loop" concept and include ways to explain AI to non-technical audiences.


#AI #promptengineering #CyberSecurity #Automation #SecurityOps #claudeai #chatgpt 


00:00 - Welcome, Intro

02:00 - Q - How do you use AI?

02:55 - The importance of effective prompt engineering

10:24 - Upcoming workshop - AI Workflow Optimization for Red Teaming

12:10 - Q - Which AI for which task? Where should I invest my time?

14:12 - Claude for coding in Python & Golang, but not great at Java

16:35 - Derek - Initial prompt improvement in Chat GPT, then go to Claude

17:37 - NotebookLM for students (https://notebooklm.google/)

20:01 - Invest your time in prompt engineering - applicable to any model

22:38 - Double check code, understand what it means, do not blindly trust AI output

25:17 - Q - How to discuss AI with a non-technical audience

28:08 - Talk to LLMs like a child

28:54 - AI is not sentient, it's just drawing relevant correlations

31:48 - Ask them clarifying questions - what are they trying to ask? What's the context?

33:37 - Q - How can you do "Human in the Loop?"

35:24 - Don't give your agentic AI too much power - treat it like a junior assistant

Show more...
4 months ago
38 minutes

AI Security Ops
Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation). Brought to you by the experts at Black Hills Information Security https://blackhillsinfosec.com -------------------------------------------------- About Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/ About Derek Banks - https://blackhillsinfosec.com/team/derek-banks/ About Brian Fehrman - https://blackhillsinfosec.com/team/brian-fehrman/ About Bronwen Aker - https://blackhillsinfosec.com/team/bronwen-aker/ About Ben Bowman - https://blackhillsinfosec.com/team/ben-bowman/