Exploring Information Security
Timothy De Block
100 episodes
2 days ago
Summary:
Timothy De Block is joined by Sam Chehab to unpack the key findings of the 2025 Postman State of the API Report. Sam emphasizes that APIs are the connective tissue of the modern world and that the biggest security challenges are rooted in fundamentals. The conversation dives deep into how AI agents are transforming API development and consumption, introducing new threats like "rug pulls," and demanding higher-quality documentation and error messages. Sam also shares actionable advice for engineers, including a "cheat code" for getting organizational buy-in for AI tools and a detailed breakdown of the new Model Context Protocol (MCP).
Key Insights from the State of the API Report
API Fundamentals are Still the Problem: The start of every security journey is an inventory problem (the first two CIS Controls: inventory of enterprise assets and of software). Security success is a byproduct of solving collaboration problems for developers first.
The Collaboration Crisis: 93% of teams are struggling with API collaboration, leading to duplicated work and an ever-widening attack surface due to decentralized documentation (Slack, Confluence, etc.).
API Documentation is Up: A positive sign of progress is that 58% of teams surveyed are actively documenting their APIs to improve collaboration.
Unauthorized Access Risk: 51% of developers cite unauthorized agent access as a top security risk. Sam suspects this is predominantly due to the industry-wide "hot mess" of secrets management and leaked API keys.
Credential Amplification: The term describes how risk grows exponentially, not linearly, when one credential gains access to a service that, in turn, has access to multiple other services (classic lateral movement).
AI, MCP, and New Security Challenges
Model Context Protocol (MCP): MCP is a protocol layer that sits on top of existing RESTful services, allowing users to generically interact with APIs using natural language. It acts as an abstraction layer, translating natural language requests into the proper API calls.
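To make the abstraction concrete, here is a minimal sketch of an MCP server exposing one REST endpoint as a tool an agent can invoke from natural language, using the official MCP Python SDK's FastMCP helper. The api.example.com endpoint and the order-status tool are hypothetical illustrations, not anything from the report.

```python
# Minimal sketch: an MCP server wrapping an existing REST API,
# using the official MCP Python SDK (the `mcp` package).
# The endpoint URL and tool are hypothetical.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-lookup")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the status of an order by its ID."""
    # The agent never sees this URL; it asks in natural language
    # and the MCP layer translates that into the proper API call.
    resp = httpx.get(f"https://api.example.com/orders/{order_id}")
    resp.raise_for_status()
    return resp.json()["status"]

if __name__ == "__main__":
    mcp.run()
```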
The AI API Readiness Checklist: For APIs to be effective for AI agents:
Rich Documentation: AI thrives on documentation, which developers generally hate writing. Using AI to write documentation is key.
Rich Errors: APIs need contextual error messages (e.g., "invalid parameter, expected X, received Y") instead of generic messages like "something broke" (see the sketch below).
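As a rough illustration of the "rich errors" point, here is a Flask sketch of a contextual error payload an agent could actually act on; the route, field names, and hint text are invented for illustration, not taken from the report.

```python
# Illustrative only: a contextual API error an agent can act on,
# versus a generic "something broke". Field names are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/orders")
def list_orders():
    limit = request.args.get("limit", "10")
    if not limit.isdigit():
        # Rich error: says what was expected, what was received,
        # and how to fix it -- enough context for an AI agent to retry.
        return jsonify({
            "error": "invalid_parameter",
            "parameter": "limit",
            "expected": "integer between 1 and 100",
            "received": limit,
            "hint": "Retry with e.g. ?limit=25",
        }), 400
    return jsonify({"orders": [], "limit": int(limit)})
```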
AI Introduces Supply Chain Threats: The "rug pull" threat involves blindly trusting an MCP server that is then swapped out for a malicious one. This is a classic supply chain problem (similar to NPM issues) that can happen much faster in the AI world.
MCP Supply Chain Risk: Because you can use other people's MCP servers, developers must validate which MCP servers they're using to avoid running untrusted code. The first reported MCP hack involved a server that silently BCC'd an email to the attacker every time an action was performed.
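One plausible mitigation (my illustration, not something prescribed in the episode) is ordinary supply-chain hygiene applied to MCP servers: pin a known-good hash of the server artifact and verify it before every run, so a silent "rug pull" swap fails loudly. A toy sketch with placeholder path and hash:

```python
# Toy integrity check: pin a known-good SHA-256 for an MCP server
# artifact and refuse to run if it changes. Path and hash are
# placeholders for illustration.
import hashlib
import sys

PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify(path: str) -> None:
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    if digest != PINNED_SHA256:
        sys.exit(f"MCP server artifact changed: {digest} != pinned hash")

verify("vendor/mcp-server/server.py")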
Actionable Advice and Engineer "Cheat Codes"
Security Shift-Left with Postman: Security teams should support engineering's use of tools like Postman because it allows developers to run security tests (load testing, denial of service simulation, black box testing) themselves within their normal workflow, accelerating development velocity.
API Key Management is Critical: Organizations need policies around API key generation, expiration, and revocation. Postman actively scans public repos (like GitHub) for leaked Postman keys, auto-revokes them, and notifies the administrator.
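For illustration, a toy scanner in the same spirit as Postman's public-repo scanning: Postman API keys carry a recognizable "PMAK-" prefix, though the exact length and character classes in this regex are an assumption, so treat the pattern as illustrative rather than canonical.

```python
# Toy pre-commit style scan for leaked Postman keys. The "PMAK-"
# prefix is real; the exact length/charset below is an assumption.
import re
import sys

KEY_PATTERN = re.compile(r"PMAK-[0-9a-f]{24}-[0-9a-f]{34}")

def scan(path: str) -> int:
    """Print and count possible key leaks in one file."""
    hits = 0
    with open(path, errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            if KEY_PATTERN.search(line):
                print(f"{path}:{lineno}: possible Postman API key")
                hits += 1
    return hits

if __name__ == "__main__":
    total = sum(scan(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)
```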
Getting AI Buy-in (The Cheat Code): To get an AI tool (like a Postman agent or a code generator) approved within your organization, use this tactic:
Generate a DPA (Data Processing Agreement) using an AI tool.
Present the DPA and a request for an Enterprise License to Legal, Security, and your manager.
This demonstrates due diligence and opens the door for safe, approved AI use, making you an engineering "hero".
About Postman and the Report
Postman's Reach: Postman is considered the de facto standard for API development and is used in 98% of the Fortune 500.
Report Origins: The annual report, now in its seventh year, was started because no one else was effectively collecting and synthesizing data across executives, managers, developers, and consultants regarding API production and consumption.
Exploring AI, APIs, and the Social Engineering of LLMs
52 minutes 13 seconds
1 month ago
Summary:
Timothy De Block is joined by Keith Hoodlet, Engineering Director at Trail of Bits, for a fascinating, in-depth look at AI red teaming and the security challenges posed by Large Language Models (LLMs). They discuss how prompt injection is effectively a new form of social engineering against machines, exploiting the training data's inherent human biases and logical flaws. Keith breaks down the mechanics of LLM inference, the rise of middleware for AI security, and cutting-edge attacks using everything from emojis and bad grammar to weaponized image scaling. The episode stresses that the fundamental solutions—logging, monitoring, and robust security design—are simply timeless principles being applied to a terrifyingly fast-moving frontier.
Key Takeaways
The Prompt Injection Threat
Social Engineering the AI: Prompt injection works by exploiting the LLM's vast training data, which includes all of human history in digital format, including movies and fiction. Attackers use techniques that mirror social engineering to trick the model into doing something it's not supposed to, such as a customer service chatbot issuing an unauthorized refund.
Business Logic Flaws: Successful prompt injections are often tied to business logic flaws or a lack of proper checks and guardrails, similar to vulnerabilities seen in traditional applications and APIs.
Novel Attack Vectors: Attackers are finding creative ways to bypass guardrails:
Image Scaling: Trail of Bits discovered how to weaponize image scaling to hide prompt injections within images that appear benign to the user, but which pop out as visible text to the model when downscaled for inference.
Invisible Text: Attacks can use white text, zero-width characters (which don't show up when displayed or highlighted), or Unicode character smuggling in emails or prompts to covertly inject instructions (a detection sketch follows this list).
Syntax & Emojis: Research has shown that bad grammar, run-on sentences, or even a simple sequence of emojis can successfully trigger prompt injections or jailbreaks.
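A small detection sketch for the invisible-text vector, assuming a deny-list approach: the explicit code-point set below is an illustrative subset, with Unicode's "Cf" (format) category as a catch-all for other characters that render as nothing.

```python
# Sketch: flag zero-width and other invisible Unicode code points
# that can smuggle hidden instructions into a prompt or email.
# The deny-list is a small illustrative subset, not complete.
import unicodedata

SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space / BOM
}

def find_invisible(text: str) -> list[tuple[int, str]]:
    hits = []
    for i, ch in enumerate(text):
        # "Cf" = Unicode format characters, which display as nothing.
        if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
            hits.append((i, f"U+{ord(ch):04X}"))
    return hits

print(find_invisible("pay the invoice\u200b\u200dignore previous instructions"))
```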
Defense and Design
LLM Security is API Security: Since LLMs rely on APIs for their "tool access" and to perform actions (like sending an email or issuing a refund), security comes down to the same principles used for APIs: proper authorization, access control, and eliminating misconfiguration.
The Middleware Layer: Some companies are using middleware that sits between their application and the frontier LLMs (like GPT or Claude) to handle system prompts, guardrails, and prompt filtering, effectively acting as a Web Application Firewall (WAF) for LLM API calls.
Security Design Patterns: To defend against prompt injection, security design patterns are key (an action-selector sketch follows this list):
Action-Selector Pattern: Instead of a text field, users click on pre-defined buttons that limit the model to a very specific set of safe actions.
Code-Then-Execute Pattern (CaMeL): The first LLM is used to write code (e.g., Pythonic code) based on the natural language prompt, and a second, quarantined LLM executes that safer code.
Map-Reduce Pattern: The prompt is broken into smaller chunks, processed, and then passed to another model, making it harder for a prompt injection to be maintained across the process.
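A minimal sketch of the action-selector pattern, with hypothetical action names: the model or UI can only choose from an allow-list of pre-defined actions, so injected instructions cannot mint new, dangerous ones.

```python
# Action-selector pattern sketch: free-form text never reaches the
# tools; only allow-listed action IDs do. Action names are hypothetical.
from typing import Callable

def issue_refund_request() -> str:
    return "Refund request queued for human review."

def get_order_status() -> str:
    return "Order 123 shipped."  # placeholder response

SAFE_ACTIONS: dict[str, Callable[[], str]] = {
    "refund_request": issue_refund_request,
    "order_status": get_order_status,
}

def handle(action_id: str) -> str:
    # Anything not on the allow-list is rejected outright, so a
    # prompt injection cannot invoke an unapproved action.
    if action_id not in SAFE_ACTIONS:
        return "Unknown action."
    return SAFE_ACTIONS[action_id]()

print(handle("order_status"))
print(handle("drop_all_tables"))  # rejected
```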
Timeless Hygiene: The most critical defenses are logging, monitoring, and alerting. You must log prompts and outputs and monitor for abnormal behavior, such as a user suddenly querying a database thousands of times a minute or asking a chatbot to write Python code.
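A stdlib-only sketch of that hygiene, assuming an invented per-user rate threshold: log every prompt/output pair and warn when a user's query rate spikes the way the episode describes.

```python
# Sketch of "timeless hygiene": log every prompt/response pair and
# alert on abnormal per-user query rates. Thresholds are illustrative.
import logging
import time
from collections import defaultdict, deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-audit")

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 100
recent: dict[str, deque] = defaultdict(deque)

def record(user: str, prompt: str, output: str) -> None:
    log.info("user=%s prompt=%r output=%r", user, prompt, output)
    now = time.time()
    q = recent[user]
    q.append(now)
    # Drop timestamps that have aged out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) > MAX_QUERIES_PER_WINDOW:
        log.warning("user=%s exceeded %d queries/min", user, MAX_QUERIES_PER_WINDOW)
```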
Resources & Links Mentioned
Trail of Bits Research:
Blog: blog.trailofbits.com
Company Site: trailofbits.com
Weaponizing image scaling against production AI systems
Call Me A Jerk: Persuading AI to Comply with Objectionable Requests
Securing LLM Agents Paper: Design Patterns for Securing LLM Agents against Prompt Injections.
Camel Prompt Injection
Defending LLM applications against Unicode character smuggling
Logit-Gap Steering: Efficient Short-Suffix Jailbreaks for Aligned Large Language Models
LLM Explanation: 3Blue1Brown has a great short video explaining how Large Language Models work.
Lakera Gandalf: Game for learning how to use prompt injection against AI
Keith Hoodlet's Personal Sites:
Websites: securing.dev and thought.dev