Think about a plumber and an electrician working on your house. Not particularly sexy jobs—awkward, messy, and frustrating at times. Customers never see the dirty parts: making holes, sometimes dealing with literal shit, all with the goal of everything being fully functional and looking as pretty as it was before.
Today's Agentic AI conversation is the reason I bring up the trades people. Everyone's obsessed with the wrapping of AI—the shiny interfaces, the impressive demos, the boardroom buzzwords. But like those hidden pipes and buried electrical lines, there's a huge part of the iceberg that makes agentic AI work. And just like bad plumbing, when it fails, it's catastrophic.
While everyone debates AI bias and hallucinations, another threat is here: the infrastructure connecting AI agents to your enterprise systems.
CB Insights reports that 86% of companies experienced AI-related security incidents in the past year. The AI agent security market is now the fastest-growing cybersecurity segment, part of an ecosystem projected to exceed $134 billion by 2030.
MCP servers are bleeding secrets—many are misconfigured, exposing files and credentials that attackers can grab without effort. It's the digital equivalent of leaving your window unlocked in the wrong part of town. Supply chain exploits ripple fast, is your third party as secure as you are. One bad connection brings down the whole system.
Agents move faster, plug into more systems, and create more risk at every turn, then anything seen before. When they fail, they do it quickly and at scale.
The question every CIO should ask: Would you let an unvetted contractor work on your electrical system without permits or oversight? Then why are you letting unvetted MCP tools run inside your AI stack?
AI agents create entirely new attack vectors that traditional cybersecurity tools can't handle: prompt injection, model poisoning, and data leakage vulnerabilities that require specialised detection. These aren't theoretical risks—they're operational realities lurking behind the pretty interfaces.
We're all focused on the finished product, but the real work happens in the spaces no one wants to think about. The infrastructure. The protocols. The configurations. The messy, unglamorous stuff that actually makes everything work.
It is the season of M&A in this nascent space. Cisco acquired Robust Intelligence, Coralogix bought Aporia, SentinelOne grabbed Prompt Security. This isn't diversification—it's recognition that AI agent security is fundamental infrastructure, as critical as the electrical system keeping your lights on.
The EU AI Act demands reconstructable chains of events and root cause analysis for AI incidents. Most organisations can't meet these requirements because their telemetry misses critical data—like trying to diagnose a plumbing problem without being able to see behind the walls.
Current reporting omits model reasoning traces, scaffolding logs, and tool state information. Without this data, root cause analysis becomes guesswork—like a plumber trying to fix a leak without knowing where the pipes run.
The first major enterprise AI breach won't be a model gone rogue. It'll be about the plumbing we ignored—misconfigurations, monitoring gaps, and infrastructure oversights that leave agents vulnerable.
Organizations implementing forensics-by-design now will harness AI benefits while managing risks. Those fixated on visible AI risks while ignoring infrastructure security will find themselves up creek without a paddle.
Agent failures aren't random. They follow traceable chains, provided you choose to log them. The question is whether companies invest in the unglamorous infrastructure work now, or pay for emergency repairs later when everything's already flooded.