Exploring Information Security
Timothy De Block
100 episodes
2 days ago
Summary:
Timothy De Block is joined by Sam Chehab to unpack the key findings of the 2025 Postman State of the API Report. Sam emphasizes that APIs are the connective tissue of the modern world and that the biggest security challenges are rooted in fundamentals. The conversation dives deep into how AI agents are transforming API development and consumption, introducing new threats like "rug pulls," and demanding higher quality documentation and error messages. Sam also shares actionable advice for engineers, including a "cheat code" for getting organizational buy-in for AI tools and a detailed breakdown of the new Model Context Protocol (MCP).
Key Insights from the State of the API Report
API Fundamentals are Still the Problem: The start of every security journey is an inventory problem (the first two CIS controls). Security success is a byproduct of solving collaboration problems for developers first.
The Collaboration Crisis: 93% of teams are struggling with API collaboration, leading to duplicated work and an ever-widening attack surface due to decentralized documentation (Slack, Confluence, etc.).
API Documentation is Up: A positive sign of progress is that 58% of teams surveyed are actively documenting their APIs to improve collaboration.
Unauthorized Access Risk: 51% of developers cite unauthorized agent access as a top security risk. Sam suspects this is predominantly due to the industry-wide "hot mess" of secrets management and leaked API keys.
Credential Amplification: Sam uses this term to describe how risk grows exponentially, not linearly, when one credential gains access to a service that, in turn, has access to multiple other services (i.e., lateral movement).
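To make the exponential-versus-linear point concrete, here is a rough sketch (not from the report; the service names and call graph are made up) that walks a "which service can call which" map from one compromised credential and counts everything reachable downstream:

```python
from collections import deque

# Hypothetical "who can call whom" map; in practice this would come from an
# API inventory rather than being hard-coded.
REACHABLE_FROM = {
    "ci-runner":   ["artifact-store", "deploy-api"],
    "deploy-api":  ["billing-api", "user-db"],
    "billing-api": ["payment-gateway"],
}

def blast_radius(start: str) -> set:
    """Return every service reachable from one compromised credential."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in REACHABLE_FROM.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

# One leaked CI credential ultimately exposes five downstream services.
print(blast_radius("ci-runner"))
```

The single "ci-runner" key never touches the payment gateway directly, yet the walk shows it is only two hops away, which is the lateral-movement amplification Sam describes.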
AI, MCP, and New Security Challenges
Model Context Protocol (MCP): MCP is a protocol layer that sits on top of existing RESTful services, allowing users to generically interact with APIs using natural language. It acts as an abstraction layer, translating natural language requests into the proper API calls.
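To make the abstraction-layer idea concrete, here is a minimal sketch of the pattern (illustrative only, not the actual MCP specification or any official SDK; the tool registry, endpoint URL, and routing are all assumptions): a "tool" wraps an existing REST endpoint, and the model's natural-language request is resolved to a tool invocation that becomes a plain HTTP call.

```python
import requests  # assumes the requests package is installed

# Hypothetical tool registry: each "tool" wraps one existing REST endpoint.
TOOLS = {
    "list_open_invoices": {
        "description": "Return unpaid invoices for a customer",
        "method": "GET",
        "url": "https://api.example.com/v1/invoices",  # placeholder URL
        "params": {"status": "open"},
    },
}

def call_tool(tool_name: str, extra_params: dict) -> dict:
    """Translate a tool invocation (chosen by the model from natural
    language) into the underlying REST call."""
    tool = TOOLS[tool_name]
    resp = requests.request(
        tool["method"],
        tool["url"],
        params={**tool["params"], **extra_params},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# The model might turn "show me Acme's unpaid invoices" into:
# call_tool("list_open_invoices", {"customer": "acme"})
```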
The AI API Readiness Checklist: For APIs to be effective for AI agents:
Rich Documentation: AI thrives on documentation, which developers generally hate writing. Using AI to write documentation is key.
Rich Errors: APIs need contextual error messages (e.g., "invalid parameter, expected X, received Y") instead of generic messages like "something broke".
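A quick illustration of the rich-errors point, with a hypothetical payload (field names are invented for the example): the second response gives an AI agent, or a human, enough context to correct the request and retry, while the first forces guesswork.

```python
# Generic error: an agent has nothing to act on.
generic_error = {"error": "something broke"}

# Contextual error: names the field, what was expected, and what was received,
# so the caller can self-correct. Field names are illustrative only.
rich_error = {
    "error": "invalid_parameter",
    "field": "due_date",
    "expected": "ISO 8601 date (YYYY-MM-DD)",
    "received": "next Tuesday",
    "hint": "Resend the request with due_date formatted like 2025-11-04.",
}
```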
AI Introduces Supply Chain Threats: The "rug pull" threat involves blindly trusting an MCP server that is then swapped out for a malicious one. This is a classic supply chain problem (similar to NPM issues) that can happen much faster in the AI world.
MCP Supply Chain Risk: Because you can use other people's MCP servers, developers must validate which MCP servers they're using to avoid running untrusted code. The first reported MCP hack involved a server that silently BCC'd an email to the attacker every time an action was performed.
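One practical mitigation, sketched below, is to pin exactly which MCP server artifact you run and verify it before launch. This is a generic checksum check rather than a feature of MCP or Postman, and the file name and pinned digest are placeholders:

```python
import hashlib
import sys

# Digest recorded when the MCP server package was originally reviewed.
PINNED_SHA256 = "<sha256-of-reviewed-artifact>"  # placeholder

def verify_artifact(path: str) -> None:
    """Refuse to run an artifact whose digest no longer matches the pin."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != PINNED_SHA256:
        sys.exit(f"Refusing to run {path}: digest {digest} does not match the pin")

verify_artifact("mcp-server.tar.gz")  # placeholder filename
```

If the upstream package is silently swapped (the rug pull), the digest changes and the launch fails closed instead of running untrusted code.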
Actionable Advice and Engineer "Cheat Codes"
Security Shift-Left with Postman: Security teams should support engineering's use of tools like Postman because it allows developers to run security tests (load testing, denial of service simulation, black box testing) themselves within their normal workflow, accelerating development velocity.
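As a flavor of the kind of check a developer can keep in their own workflow (written here as a standalone Python test rather than anything Postman-specific; the endpoint URL is a placeholder), a minimal black-box test asserts that an unauthenticated request is rejected:

```python
import requests  # assumes the requests package is installed

API_URL = "https://api.example.com/v1/invoices"  # placeholder endpoint

def test_rejects_unauthenticated_requests():
    """Black-box check: calling the API with no credentials must be refused."""
    resp = requests.get(API_URL, timeout=10)
    assert resp.status_code in (401, 403), (
        f"Expected 401/403 without credentials, got {resp.status_code}"
    )
```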
API Key Management is Critical: Organizations need policies around API key generation, expiration, and revocation. Postman actively scans public repos (like GitHub) for leaked Postman keys, auto-revokes them, and notifies the administrator.
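The same idea can be approximated in-house with a pre-push scan for key-shaped strings. The sketch below is a generic illustration, not Postman's scanner; it assumes Postman keys carry a "PMAK-" prefix, and the rest of the pattern is deliberately loose:

```python
import re
from pathlib import Path

# Assumed key shape: "PMAK-" prefix followed by a long token. Kept loose so
# the scan errs toward flagging rather than missing.
KEY_PATTERN = re.compile(r"PMAK-[A-Za-z0-9_-]{20,}")

def scan_repo(root: str) -> list:
    """Return (file, match) pairs for anything that looks like a leaked key."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in KEY_PATTERN.findall(text):
            hits.append((str(path), match))
    return hits

if __name__ == "__main__":
    for file, key in scan_repo("."):
        print(f"Possible leaked key in {file}: {key[:10]}...")
```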
Getting AI Buy-in (The Cheat Code): To get an AI tool (like a Postman agent or a code generator) approved within your organization, use this tactic:
Generate a DPA (Data Processing Agreement) using an AI tool.
Present the DPA and a request for an Enterprise License to Legal, Security, and your manager.
This demonstrates due diligence and opens the door for safe, approved AI use, making you an engineering "hero".
About Postman and the Report
Postman's Reach: Postman is considered the de facto standard for API development and is used in 98% of the Fortune 500.
Report Origins: The annual report, now in its seventh year, was started because no one else was effectively collecting and synthesizing data across executives, managers, developers, and consultants regarding API production and consumption.
Exploring the Rogue AI Agent Threat with Sam Chehab
Exploring Information Security
39 minutes 1 second
1 month ago
Summary:
In a unique live recording, Timothy De Block is joined by Sam Chehab from Postman to tackle the intersection of AI and API security. The conversation goes beyond the hype of AI-created malware to focus on a more subtle, yet pervasive threat: "rogue AI agents." The speakers define these as sanctioned AI tools that, when misconfigured or given improper permissions, can cause significant havoc by misbehaving and exposing sensitive data. The episode emphasizes that this risk is not new, but rather an exacerbation of classic hygiene problems.
Key Takeaways
Defining "Rogue AI Agents": Sam Chehab defines a "rogue AI agent" as a sanctioned AI tool that misbehaves due to misconfiguration, often exposing data it shouldn't have access to. He likens it to an enterprise search tool in the early 2000s that crawled an intranet and surfaced things it wasn't supposed to.
The AI-API Connection: An AI agent comprises six components, and the "tool" component is where it interacts with APIs. The speakers note that APIs are the agent's "arms and legs" and are often where it gets into trouble.
The Importance of Security Hygiene: The core of the solution is to "go back to basics" with good hygiene. This includes building APIs with an OpenAPI spec, enforcing schemas, and ensuring single-purpose logins for integrations to improve traceability.
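A tiny example of the enforce-schemas point, validating an inbound request body before any business logic (or an AI agent acting through the API) sees it. It uses the jsonschema package, and the endpoint and field names are invented for illustration:

```python
from jsonschema import ValidationError, validate  # pip install jsonschema

# Illustrative request schema: anything that doesn't match is rejected up front.
CREATE_TICKET_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string", "maxLength": 200},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["title", "priority"],
    "additionalProperties": False,
}

def handle_create_ticket(body: dict) -> dict:
    """Validate the payload against the schema, returning a contextual error on failure."""
    try:
        validate(instance=body, schema=CREATE_TICKET_SCHEMA)
    except ValidationError as err:
        # Rich, contextual error rather than "something broke".
        return {"status": 400, "error": "invalid_parameter", "detail": err.message}
    return {"status": 201, "ticket": body}
```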
The Rise of the "Citizen Developer": The conversation highlights a new security vector: non-developers, or "citizen developers," in departments like HR and finance building their own agents using enterprise tools. These individuals often lack security fundamentals, and their workflows are a "ripe area for risk".
AI's Role in Development: Sam and Timothy discuss how AI can augment a developer's capabilities, but a human is still needed in the process. The report from Veracode notes that AI-generated code is only secure about 45% of the time, which is about on par with human-written code. The best approach is to use AI to fix specific lines of code in pre-commit, rather than having it write entire applications.
Resources & Links Mentioned
Postman State of the API Report: This report, which discusses API trends and security, will be released on October 8th. The speakers tease a follow-up episode to dive into its findings.
Veracode: The 2025 GenAI Code Security Report was mentioned in the discussion on AI-generated code.
GitGuardian: The State of Secrets Sprawl report was referenced as a key resource.
Cloudflare: Mentioned as a service for API shield and monitoring API traffic.
News Sites: Sam Chehab recommends Security Affairs, The Hacker News, Cybernews, and Information Security Magazine for staying up-to-date.