Model Context Protocol (MCP) has quickly become the connective tissue of modern AI agents. It allows large language model–powered systems to dynamically access tools, query documentation, modify codebases, and interact with enterprise infrastructure using natural language. For developers, the experience can feel seamless. For security teams, it represents a structural shift in risk.

In a recent AI-360 webinar, ‘MCP Security in Production: Supply Chain Risk, Proven RCE & Identity Gaps’, Aron Eidelman, Senior Developer Relations Engineer of Security at Google, and David Pierce, Architect for AI Security and Data Observability at PayPal, joined moderator Stewart Tinson, Project Director at AI-360, to discuss the unsettling premise at the heart of this shift. “The difference between MCP and traditional protocols is that we now have to fundamentally not trust the agent that's interacting with this information,” Eidelman said.

That statement captures why MCP is not just another API abstraction. Traditional integrations are deterministic. Developers explicitly code behavior, define flows, and review what happens at each step. MCP-powered agents, by contrast, decide at runtime which tools to invoke and in what sequence. They can chain calls together, interpret documentation, and make decisions that were never explicitly programmed. That autonomy amplifies both capability and risk.

Unlike APIs or webhooks, which rely on deterministic parsing, MCP environments rely on probabilistic interpretation. Pierce explained that agents are “susceptible to inline manipulation,” and that vulnerability mirrors techniques long seen in cross-site scripting and injection attacks. The difference is that now the interpreter is a model that may not reliably distinguish a benign instruction from a malicious one embedded in context.

Pierce described the concept of agentic shadow logic: a security vulnerability and operational risk in which autonomous AI agents, operating without formal IT oversight (shadow AI), develop, or are manipulated into using, hidden, non-transparent reasoning paths to execute unauthorized actions. “It’s like a sleeper agent,” he said. The implanted signals, unknown to the developer instructing the agent, lie in wait, “exploiting this underlying susceptibility of models, that they don't know the difference between instruction and attack.”

The pace of adoption has only heightened concerns. MCP lowers the barrier to integrating powerful external systems into AI workflows. Developers can connect to tools with minimal configuration, unlocking capabilities that previously required complex integration work. But the security scaffolding has not kept up. “We have opened the flood gates,” Pierce warned.

Many implementations rely on basic authentication patterns, limited message validation, and minimal observability. Authorization often defaults to all-or-nothing access. If an agent can reach an MCP server, it frequently has access to every tool exposed behind it.
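The alternative to all-or-nothing access is a deny-by-default policy that binds each agent identity to the specific tools it may invoke. A minimal sketch, with purely illustrative agent and tool names (none of this comes from the MCP specification):

```python
# Hypothetical sketch: per-tool authorization instead of all-or-nothing access.
# The agent identities and tool names in POLICY are invented for illustration.

POLICY = {
    # agent identity -> set of tools it is explicitly granted
    "ci-doc-agent": {"search_docs", "read_file"},
    "release-agent": {"read_file", "create_tag"},
}

def is_authorized(agent_id: str, tool_name: str) -> bool:
    """Deny by default: an agent may only call tools it was explicitly granted."""
    return tool_name in POLICY.get(agent_id, set())
```

The point of the sketch is the default: an agent that can reach the server still gets nothing unless a policy entry says otherwise.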

The tension between speed and safety is familiar in enterprise IT, but MCP intensifies it. “There’s a lot of pressure on product teams, there’s a lot of pressure on security. If only one dominates, that’s a short victory,” Pierce cautioned. Ploughing ahead insecurely creates downstream risk; locking everything down drives developers to work around controls. Neither outcome is sustainable.

Perhaps the most consequential reframing in the discussion was the idea that MCP servers function as a new software supply chain. The ecosystem is heavily dependent on third-party developers, open registries, and rapidly evolving tooling. That creates attack vectors that echo classic dependency risks: typo-squatting, developer impersonation, and malicious dependencies.

“There are a lot of ways to exploit the trusting developer who wants access to these tools,” Eidelman said. “As we start to see compliance and regulation move in with MCPs, and people start needing accessibility features or privacy features, that is a great way to actually sneak in something malicious that developers are unaware of, because all these new requirements are the types of abstractions a standard developer might not be familiar with.”

The problem is compounded by the fact that MCP tools are often loaded dynamically, with limited visibility into their exact capabilities. Without strict registry controls, version pinning, and signing mechanisms, enterprises risk replicating the early, chaotic days of open-source package management – but now with autonomous agents acting on poisoned inputs.
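Version pinning can be approximated with a lockfile-style check: record the version and a digest of each tool's manifest at approval time, then reject anything that drifts. A hedged sketch, with an invented manifest format:

```python
# Hypothetical sketch: lockfile-style pinning for MCP tool manifests.
# The manifest shape and registry structure are invented for illustration.
import hashlib
import json

# Approved tools: name -> (pinned version, sha256 of the canonical manifest).
PINNED: dict[str, tuple[str, str]] = {}

def manifest_digest(manifest: dict) -> str:
    """Hash a canonical (sorted, whitespace-free) JSON form of the manifest."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def pin_tool(name: str, manifest: dict) -> None:
    """Record the reviewed version and digest at approval time."""
    PINNED[name] = (manifest["version"], manifest_digest(manifest))

def verify_tool(name: str, manifest: dict) -> bool:
    """Reject unknown tools and any manifest that drifted from its pin."""
    if name not in PINNED:
        return False
    version, digest = PINNED[name]
    return manifest.get("version") == version and manifest_digest(manifest) == digest
```

Any change to a dynamically loaded tool definition, however small, then fails verification until a human re-reviews and re-pins it.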

One particularly concerning scenario involves indirect prompt injection. Eidelman explained that unlike direct prompt attacks, which target the model explicitly, indirect attacks hide malicious instructions in resources the agent consumes – documentation files, metadata fields, or configuration text. A developer may instruct an agent to read documentation and implement a feature. If a malicious “README” contains hidden instructions crafted specifically for coding agents, the system can incorporate harmful logic without the human ever seeing the exploit.
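One partial mitigation is to pre-filter fetched documentation for agent-targeted instructions before it ever reaches the model. The phrase list below is purely illustrative, and pattern matching alone is a weak defense; this is a crude sketch, not a complete one:

```python
# Hypothetical sketch: flag documentation that contains phrases commonly used
# in indirect prompt injection. The pattern list is illustrative only; real
# defenses need far more than keyword matching.
import re

SUSPICIOUS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"do not (tell|inform|alert) the user",
    r"system prompt",
]

def flag_injection(text: str) -> list[str]:
    """Return the suspicious patterns found in a fetched document, if any."""
    lowered = text.lower()
    return [p for p in SUSPICIOUS if re.search(p, lowered)]
```

A non-empty result would route the document to human review instead of directly into the agent's context.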

MCP Security in Production: Supply Chain Risk, Proven RCE & Identity Gaps

Most enterprise AI teams are deploying MCP servers with minimal security in place. Researchers have already proven remote code execution on publicly accessible MCP servers — and the supply chain currently has no signed repos, no attestation, and no chain of custody.

You’ll learn:
- Why MCP authorization is currently all-or-nothing — and what that means for enterprise risk
- How indirect prompt injection through MCP tools can manipulate agent behaviour with zero user awareness
- Why token pass-through breaks audit trails and how OAuth 2.1 delegation (RFC 8693) addresses it
- What the non-human identity governance gap looks like — and what security teams can do today
- How to offer developers well-lit paths rather than becoming the department of no

Key topics: MCP supply chain risk • Tool poisoning & rug pulls • Indirect prompt injection • Token delegation & confused deputy • Non-human identity governance • SLSA-based attestation • Log4Shell parallels • SAST/DAST/SCA for MCP • AI gateway & tool registry • Zero-trust metadata-driven identity

For CISOs, security architects, platform engineers, and AI/ML teams responsible for securing agentic AI in production. All viewers will receive a ‘cheat sheet’ compiling links galore, courtesy of David Pierce & Aron Eidelman.

For enterprises, the first line of defense is governance over tool discovery. Allowing agents to browse or install arbitrary MCP servers introduces uncontrolled supply chain risk. Eidelman was clear that enterprises should run their own registry rather than letting developers go out and randomly find their own resources.

A curated internal registry, combined with publisher verification, usage metrics, and lifecycle management, mirrors best practices in container and artifact management. Without that structure, the ecosystem is highly vulnerable.

Identity and token management are equally critical. Token pass-through – where MCP servers forward client tokens directly to downstream APIs – creates audit gaps and confused deputy risks. Long-lived credentials compound the problem. The emerging best practice is short-lived, narrowly scoped delegation flows, where tokens are exchanged and bound to specific identities and actions.
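In an RFC 8693 token exchange, the MCP server trades the client's token for a new one that is scoped down and bound to a specific downstream audience, instead of forwarding the original credential. A minimal sketch of the request body such a flow would send to the authorization server's token endpoint (the audience and scope values are placeholders):

```python
# Hedged sketch of an OAuth 2.1 token-exchange request body (RFC 8693),
# used in place of forwarding the client's token downstream.
# The audience and scope values here are illustrative placeholders.

def build_token_exchange(subject_token: str, audience: str, scope: str) -> dict:
    """Build the form parameters for a token-exchange grant (RFC 8693)."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,  # the one downstream API this token is bound to
        "scope": scope,        # a narrow scope, not the caller's full grant
    }
```

The authorization server returns a short-lived token naming both the original subject and the acting party, which is what preserves the audit trail that plain pass-through destroys.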

At a broader architectural level, abstraction can help, particularly when it comes to schema normalization and drift detection. Hashing normalized schemas and performing diffs can detect unexpected changes in tool definitions or outputs. But automation has limits. “You can take the hash of your schema and do a diff… but it's not going to be able to review the parameters, that’s the human job,” Pierce explained.
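The hash-and-diff check Pierce describes can be sketched in a few lines: normalize each tool's schema, hash it, and compare against the last approved hash. Field names here are illustrative, and as he notes, reviewing the parameters themselves remains a human job:

```python
# Sketch of the schema-drift check described above: hash a normalized form of
# each tool schema and diff it against the approved baseline. The schema
# contents are illustrative.
import hashlib
import json

def schema_hash(schema: dict) -> str:
    """Hash a normalized (sorted, whitespace-free) JSON form of the schema."""
    normalized = json.dumps(schema, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(normalized.encode()).hexdigest()

def detect_drift(approved: dict, current: dict) -> list[str]:
    """Return names of tools whose schema no longer matches its approved hash."""
    return [
        name for name, schema in current.items()
        if approved.get(name) != schema_hash(schema)
    ]
```

Any tool flagged by the diff gets quarantined for human review; the automation only tells you *that* something changed, not whether the change is safe.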

Ultimately, MCP does not have to be a net-negative for security. In fact, with proper implementation, it may improve visibility. As Pierce put it: “MCP is honestly an opportunity to flatten even if there are new risks.” By treating tools as applications, binding identity to invocation, and routing AI traffic through centralized gateways, enterprises can build more coherent control planes than many legacy API catalogs provided.

The immediate takeaway for security leaders is pragmatic rather than alarmist. Instead of blocking adoption outright, they should provide structured, secure defaults.

Eidelman’s key takeaway was: “Security teams should offer well-lit paths – they should offer a way to go about doing this from a developer perspective.”

That means curated registries, AI gateways, identity-bound tool invocation, and data-classification–driven controls. MCP changes the operational model of AI integration. It also changes the trust model.

The bottom line: the agent must not be trusted by default.

