Microsoft has detailed its Agent Factory security blueprint, a comprehensive framework designed to address enterprise challenges in deploying AI agents safely at scale. The blueprint represents the sixth installment in Microsoft's Agent Factory blog series sharing best practices for adopting agentic AI.

The framework centres on Azure AI Foundry's layered approach combining identity management, guardrails, evaluations, adversarial testing, data protection, monitoring, and governance. According to Yina Arenas, Corporate Vice President of Product, Core AI at Microsoft, enterprises face mounting concerns where "CISOs worry about agent sprawl and unclear ownership" while security teams need guardrails connecting to existing workflows.

Microsoft identifies five core qualities for enterprise-ready agents: unique identity tracking, data protection by design, built-in controls, threat evaluation, and continuous oversight. The company states these qualities "do not guarantee absolute safety, but they are essential for building trustworthy agents that meet enterprise standards."

Azure AI Foundry provides several enterprise-focused capabilities. The platform will assign unique Entra Agent IDs to all agents, giving organisations visibility into active agents across tenants. The system includes agent controls featuring cross-prompt injection classifiers that scan documents, tool responses, email triggers, and other untrusted sources to flag, block, and neutralise malicious instructions.
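Microsoft has not published the internals of these classifiers, which are machine-learning based rather than rule based. As a purely illustrative sketch of the pipeline shape, one could imagine untrusted inputs being screened and suspicious spans neutralised before they reach an agent's context (the pattern list and function names below are hypothetical, not part of Azure AI Foundry):

```python
import re

# Hypothetical, simplified stand-in for a cross-prompt injection classifier.
# Real classifiers are ML models; a keyword list is only for illustration.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
    r"you are now an? \w+",
]

def scan_untrusted_content(text: str) -> dict:
    """Flag content from untrusted sources (documents, tool responses,
    email triggers) before it is added to the agent's context."""
    hits = [p for p in INJECTION_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return {"blocked": bool(hits), "matched_patterns": hits}

def neutralise(text: str) -> str:
    """Strip suspected injection spans rather than dropping the whole
    input, so benign content still reaches the agent."""
    for p in INJECTION_PATTERNS:
        text = re.sub(p, "[removed: suspected injection]", text, flags=re.IGNORECASE)
    return text
```

The key design point the blueprint implies is where the check sits: between untrusted data sources and the model, so a poisoned document or email cannot silently rewrite the agent's instructions.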

The platform integrates with Microsoft Purview, allowing agents to honour sensitivity labels and DLP policies so data protections carry through into agent outputs. Microsoft Defender integration surfaces alerts and recommendations directly in the agent environment, with telemetry streaming into Microsoft Defender XDR for security operations teams.
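The practical effect of honouring sensitivity labels is that an agent's answer stays at least as restricted as the most sensitive source it drew on. A minimal sketch of that propagation rule, with an assumed label ordering and hypothetical type names (this is not the Purview API):

```python
from dataclasses import dataclass

# Assumed label hierarchy for illustration; real Purview labels are
# configured per tenant and enforced by DLP policy, not by this code.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

@dataclass
class SourceDocument:
    text: str
    label: str

def output_label(sources: list) -> str:
    """An agent output inherits the most restrictive label among the
    sources it used, so DLP policies carry through to generated text."""
    if not sources:
        return "Public"
    return max((d.label for d in sources), key=LABEL_RANK.__getitem__)
```

Under this rule, an answer that combines a General memo with a Confidential report is itself treated as Confidential, and existing DLP controls apply to it unchanged.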

EY uses Azure AI Foundry's leaderboards and evaluations to compare models by quality, cost, and safety. Accenture is testing the Microsoft AI Red Teaming Agent to simulate adversarial prompts at scale across multi-agent workflows before deployment.

The blueprint addresses enterprise security teams' workflow integration needs while providing developers with built-in safety protections from project inception. Organisations can maintain data within tenant boundaries under existing security and compliance controls through standard agent setup configurations.

Microsoft positions trust as "the defining challenge for enterprise AI" with data leakage, prompt injection, and regulatory uncertainty identified as "the top blockers to AI adoption." The framework reflects industry movement toward integrated AI governance. Governance collaborator integrations with Credo AI and Saidot enable organisations to map evaluation results to regulatory frameworks including the EU AI Act and NIST AI Risk Management Framework.

