Home
Categories
EXPLORE
True Crime
Comedy
Society & Culture
Business
History
Sports
Technology
About Us
Contact Us
Copyright
© 2024 PodJoint
00:00 / 00:00
Sign in

or

Don't have an account?
Sign up
Forgot password
https://is1-ssl.mzstatic.com/image/thumb/Podcasts211/v4/2e/27/4f/2e274f4d-9d21-cf54-07e4-410ecf76503a/mza_8949160798761977296.jpg/600x600bb.jpg
The AI Security Podcast
Harriet Farlow (HarrietHacks)
48 episodes
1 day ago

I missed the boat in computer hacking so now I hack AI instead. This podcast discusses all things at the intersection of AI and security. Hosted by me (Harriet Farlow aka. HarrietHacks) and Tania Sadhani and supported by Mileva Security Labs. 

Chat with Mileva Security Labs for your AI Security training and advisory needs: https://milevalabs.com/

Reach out to HarrietHacks if you want us to speak at your event: https://www.harriethacks.com/ 

Show more...
Technology
RSS
All content for The AI Security Podcast is the property of Harriet Farlow (HarrietHacks) and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.

I missed the boat in computer hacking so now I hack AI instead. This podcast discusses all things at the intersection of AI and security. Hosted by me (Harriet Farlow aka. HarrietHacks) and Tania Sadhani and supported by Mileva Security Labs. 

Chat with Mileva Security Labs for your AI Security training and advisory needs: https://milevalabs.com/

Reach out to HarrietHacks if you want us to speak at your event: https://www.harriethacks.com/ 

Show more...
Technology
Episodes (20/48)
The AI Security Podcast
Agentic AI Security | case studies by Microsoft, OWASP

As promised, I’m back with Tania for a deep dive into the wild world of agentic AI security — how modern AI agents break, misbehave, or get exploited, and what real case studies are teaching us.

We’re unpacking insights from the Taxonomy of Failure Modes in Agentic AI Systems, the core paper behind today’s discussion, and exploring what these failures look like in practice.

We also break down three great resources shaping the conversation right now:

Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems — a super clear breakdown of how agent failures emerge across planning, decision-making, and action loops: https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Taxonomy-of-Failure-Mode-in-Agentic-AI-Systems-Whitepaper.pdf

OWASP’s Agentic AI Threats & Mitigations — a practical, security-team-friendly guide to common attack paths and how to defend against them: https://genai.owasp.org/resource/agentic-ai-threats-and-mitigations/

Unit 42’s Agentic AI Threats report — real-world examples of adversarial prompting, privilege escalation, and chain-of-trust issues showing up in deployed systems: https://unit42.paloaltonetworks.com/agentic-ai-threats/

Join us as we translate the research, sift through what’s real vs. hype, and talk about what teams should be preparing for next 🚨🛡️.

Show more...
1 day ago
32 minutes 34 seconds

The AI Security Podcast
a hacky christmas message

A quick end-of-year message to say thanks. Thanks for being part of the channel this year — whether you’ve been watching quietly, sharing, or arguing with me in the comments. I really appreciate it.I hope you have a good Christmas and holiday period, whatever that looks like for you. Take a break if you can. See you in 2026.

Show more...
2 weeks ago
3 minutes 43 seconds

The AI Security Podcast
Three Black Hat talks at just 18! My interview with Bandana Kaur.

In this episode, I’m joined by Bandana Kaur — a cybersecurity researcher, speaker, and all-round superstar who somehow managed to do in her teens what most people are still figuring out in their thirties. 🤔

Bandana is just 18 years old, entirely self-taught in cybersecurity, already working in the field, and recently gave three talks at Black Hat. Yes, three! 😱

We talk about how she taught herself cybersecurity as a teenager, how she broke into the industry without a traditional pathway, and what it’s actually like being young (and very competent) in a field that still struggles with gatekeeping. Bandana shares what she focused on while learning, how she approached opportunities like conference speaking, and what she thinks matters most for people trying to get into security today.

This conversation is part career advice, and part reminder that you don’t need permission — or a perfectly linear path — to do meaningful work in cybersecurity.


Follow Bandana: @hackwithher

Show more...
2 weeks ago
12 minutes 45 seconds

The AI Security Podcast
Effective Altruism and AI with Good Ancestors CEO Greg Sadler | part 2

Remember that time I invited myself over to Greg's place with my camera? This is part 2 from that great conversation. I'm curious to hear whether you've heard a lot about EA? It's something really big in the AI world but I'm conscious a lot of people outside the bubble haven't heard of it. Let me know in the comments! Check out Greg's work here: https://www.goodancestors.org.au/

MIT AI Risk Repository: https://airisk.mit.edu/

The Life You Can Save (book): https://www.thelifeyoucansave.org/book/

80,000 hours: https://80000hours.org/

Learn more about AI capability and impacts: https://bluedot.org/

Show more...
3 weeks ago
31 minutes 28 seconds

The AI Security Podcast
AI Safety with CEO of Good Ancestors Greg Sadler | part 1

This week I invited myself over to Greg Sadler's place, the CEO of Good Ancestors, about AI safety. I brought sushi but I didn't have lunch so I ate most of it, and then I almost made him late for his next meeting. We specifically chat through AI capabilities, his work in policy, and building a not-profit. Greg is the kind of person who is so smart and cool that I feel like an absolute dummy interviewing him - so I know you're all going to like this episode. Stay tuned for part 2 where we dive into effective altruism and its intersection with AI!


Check out Greg's work here: https://www.goodancestors.org.au/

MIT AI Risk Repository: https://airisk.mit.edu/

The Life You Can Save (book): https://www.thelifeyoucansave.org/book/

80,000 hours: https://80000hours.org/

Learn more about AI capability and impacts: https://bluedot.org/

Show more...
4 weeks ago
27 minutes 53 seconds

The AI Security Podcast
The United States AI Action Plan | will they win the AI race against China? 🤔

Hi! 👋 In this episode, we’re diving into the US AI Action Plan — the White House’s new roadmap for how America plans to lead in AI.. and beat China.We’ll look at what’s inside the plan, what it really means for AI security and regulation, and whether it’s more of a policy promise… or a political one.📄 You can read the full plan here:https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdfLet me know what you think — is this the kind of leadership AI needs, or a dangerous framing of AI capability?

Show more...
1 month ago
30 minutes 9 seconds

The AI Security Podcast
AI Security vs Application Security

Welcome back! 👋

After taking a little break to reset and redesign everything behind the scenes, I’m back — and consolidating all my content. This episode is part of both The AI Security Podcast (on Spotify and Apple Podcasts) and my YouTube channel, HarrietHacks — so whether you prefer to listen or watch, you’ll get the same great conversations (and bad jokes) across both platforms.

From now on, I’ll be posting at least fortnightly (with the occasional bonus episode when something big happens… like when I announced the book!).

I’ve been in a few conversations lately where people have tried to convince me that AI Security is just Application Security in disguise. Naturally, I disagree. 🤷‍♀️ So in this episode, we dive into AI Security vs Application Security — how they overlap, where they diverge, and why securing AI systems demands new thinking beyond traditional AppSec.

💌 Sign up for the newsletter: http://eepurl.com/i7RgRM

📘 Pre-order The AI Security Handbook: [link coming soon]

🎥 Watch this episode and more on YouTube: https://www.youtube.com/@HarrietHacks


🔗 Useful Links

SQL Injection Examples (W3Schools): https://www.w3schools.com/sql/sql_injection.asp
Application Security Blog (Medium): https://medium.com/@pixelprecisionengineering1/application-security-appsec-in-cybersecurity-855ad9ce5e5e
Echoleak Zero-Click Copilot Exploit (Dark Reading): https://www.darkreading.com/application-security/researchers-detail-zero-click-copilot-exploit-echoleak
Traditional AppSec vs AI Security (Pillar Security): https://www.pillar.security/blog/traditional-appsec-vs-ai-security-addressing-modern-risks

Show more...
1 month ago
30 minutes 22 seconds

The AI Security Podcast
Agentic AI Security: A Primer

For a while we've been wanting to talk about Agentic AI Security.. the thing is that we could spend multiple episodes talking about it! So we decided to do just that. This is part 1 - a primer - where we talk about exactly what AI agents are and why we may need to consider their security a bit differently. Stay tuned for the rest of the series!

Show more...
4 months ago
19 minutes 2 seconds

The AI Security Podcast
How Likely Are AI Security Incidents? Updates From Our Final Report!

Six months ago Tania and I made an episode about the interim report for our AI Security Likelihood Project.. and it is finally time to discuss the final report! You'll see it live at this link shortly: https://www.aisecurityfundamentals.com/

The premise was simple: are AI security incidents happening in the wild? What can we learn about future incidents from these historic ones? We answer some of these questions.

Show more...
5 months ago
31 minutes 28 seconds

The AI Security Podcast
To open or close model weights?

In this episode, Tania and I discuss the debate around closed or open model weights. What do you think?


The RAND report we mention: https://www.rand.org/pubs/research_reports/RRA2849-1.html

Show more...
5 months ago
27 minutes 52 seconds

The AI Security Podcast
Creative prompt injection in the wild

In this episode, Tania and I talk through some creative examples of prompt injection/engineering we've seen in the wild.. think prompts hidden in papers, red-teaming and web-scraping.

Your Brain on ChatGPT: https://arxiv.org/pdf/2506.08872

Paper with hidden text (p. 12):  https://arxiv.org/abs/2502.19918v2

Interesting overview: https://www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers/

Echoleak blog post: https://www.aim.security/lp/aim-labs-echoleak-m365


Show more...
5 months ago
31 minutes 10 seconds

The AI Security Podcast
Threat intel digest: 23 June 2025

This week we discussed multiple AI vulnerabilities, including Echolink in M365 Copilot, Agent Smith in Langchain, and a SQL injection flaw in Llama Index, all of which have been patched. We also covered a data exposure bug in Asana's MCP server and OWASP's project to create an AI vulnerability scoring system, while also outlining Google's defense layers for Gemini, Thomas Roccia's Proximity tool for MCP server security, news regarding AI and legal/security concerns, and research on AI hacking AI, prompt compression, multi-agent security protocols, and the security of reasoning models versus LLMs.

Show more...
6 months ago
52 minutes 13 seconds

The AI Security Podcast
AI safety evaluations with Inspect

I'm back from holiday, and this week Tania and I talk about a project she completed as part of the ARENA AI safety curriculum to replicate the findings of evaluations on frontier AI capabilities.


Link to reasoning paper: https://arxiv.org/abs/2502.09696

Link to the Inspect dashboard: https://inspect-evals-dashboard.streamlit.app/

ARENA AI Safety course: https://www.arena.education/

Show more...
6 months ago
32 minutes 52 seconds

The AI Security Podcast
Threat intel digest: 9 June 2025

This week we try a new condensed format for the AI security digest! we covered critical CVEs, including vulnerabilities in AWS MCP, Llama Index, GitHub MCP integration, and tool poisoning attacks. We also reported on malware campaigns using spoofed AI installers, a supply chain attack via fake PyTorch models, and the AI-guided discovery of a Linux kernel vulnerability by Sean Healin using OpenAI's 03 model. We addressed OpenAI's actions against malicious use of their models, Reddit's lawsuit against Anthropic for data scraping, the creation of an AI model for reconstructing 3D faces from DNA by Chinese researchers, a zero-trust framework for AI agent identity management proposed by the Cloud Security Alliance, research on an agent-based red teaming framework, the impact of context length on LLM vulnerability, and CSIRO's technique for improving deep fake detection. We also highlighted the vulnerablemcp.info project and the ongoing evolution of AI security best practices.

Sign up to get the digest in your inbox: http://eepurl.com/i7RgRM

Show more...
6 months ago
54 minutes 57 seconds

The AI Security Podcast
Threat intel digest: 26 May 2025

Sign up to receive in your inbox: http://eepurl.com/i7RgRM

Tania Sadhani and Miranda R discussed various AI security topics, including critical CVEs affecting platforms like ChatGPT and Hugging Face, the potential for SharePoint Copilot in internal reconnaissance, and malicious npm packages targeting Cursor developers. They also covered the OASP Gen AI security initiative's Agent Name Service (ANS), the proposed AI.txt for controlling AI agent interactions, and Unit 42's framework for agentic AI attacks. Furthermore, Miranda highlighted security guidance from international agencies, Anthropic triggering ASL 3 for Claude Opus 4, Microsoft's AI red teaming playground, a significant data leak from an AI vendor, and the Israeli police's use of AI-hallucinated laws.

Show more...
7 months ago
39 minutes 23 seconds

The AI Security Podcast
AI Vulnerability Research with Aditya Rana

Ever wondered how security vulnerabilities are found in AI? Join us as we chat with Aditya, a Vulnerability Researcher at Mileva Security Labs!

Show more...
7 months ago
38 minutes 43 seconds

The AI Security Podcast
Threat intel digest: 12 May 2025

Sign up to receive in your inbox: http://eepurl.com/i7RgRM


This week we note regular CVEs in AI libraries such as Nvidia TensorFlow and PyTorch. We discuss a novel prompt injection technique called "policy puppetry", along with malware dispersal through fake AI video generators and Meta's release of an open-source AI security tool set including Llama Firewall. We also covered Israel's experimental use of AI in warfare, Russia's AI-enabled drones in Ukraine, China's crackdown on AI misuse, Dreadnode's research on AI in red teaming, geolocation doxing via multimodal LLMs, safety research on autonomous vehicle attacks targeting inference time, Config Scan for analyzing malicious configurations on Hugging Face, Spotlight as a physical solution against deepfakes, and Reply Bench for benchmarking autonomous replication of LLM agents.

Show more...
7 months ago
48 minutes 22 seconds

The AI Security Podcast
The evolution of data science and AI ethics with Dr Alberto Chierici

This week I'm joined by my friend Alberto, he has an incredible storied career - from data science, insurance, AI risk, advising Tesla.. check out his book here! 
https://www.amazon.com.au/Ethics-I-Facts-Fictions-Forecasts/dp/1636763650

Show more...
8 months ago
49 minutes 59 seconds

The AI Security Podcast
Stanford's 2025 AI Index Report

We talk about Stanford Human-Centred AI's latest AI Index report, check it out here: https://hai.stanford.edu/ai-index/2025-ai-index-report

Show more...
8 months ago
35 minutes 36 seconds

The AI Security Podcast
Threat intel digest: 28 April 2025

Did you know we have a fortnightly threat intel newsletter? We decided there was so much good research in there we have to talk about it here! We're joined by threat intel lead Miranda for this fortnight's biggest AI security news, coming out in this week's digest! http://eepurl.com/i7RgRM

Show more...
8 months ago
37 minutes 41 seconds

The AI Security Podcast

I missed the boat in computer hacking so now I hack AI instead. This podcast discusses all things at the intersection of AI and security. Hosted by me (Harriet Farlow aka. HarrietHacks) and Tania Sadhani and supported by Mileva Security Labs. 

Chat with Mileva Security Labs for your AI Security training and advisory needs: https://milevalabs.com/

Reach out to HarrietHacks if you want us to speak at your event: https://www.harriethacks.com/