Cloud security company Wiz has introduced a set of AI-driven capabilities spanning agents, workflow automation, attack simulation, and application-layer protection, positioning AI as an operational layer embedded directly into cloud security workflows rather than as a standalone tool.

At the core of the release is a new agent model integrated into the Wiz platform. These agents are designed to execute discrete security tasks, such as issue investigation and threat triage, using contextual data from across cloud environments. For example, an Issues Agent correlates findings, ownership, and remediation paths to prioritize fixes, while a SecOps Agent automates threat investigation by assembling telemetry, analyzing activity, and producing a verdict with supporting reasoning. This shifts early-stage analysis from manual workflows to machine-led triage, reducing time spent on investigation and allowing human analysts to focus on higher-complexity decisions.
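Machine-led triage of this kind boils down to correlating contextual signals into a verdict with explicit supporting reasoning. The sketch below is purely illustrative — the signal names, scoring weights, and thresholds are assumptions, not Wiz's actual logic or API:

```python
from dataclasses import dataclass, field

@dataclass
class TriageVerdict:
    """Output of an automated triage pass: a verdict plus the evidence behind it."""
    finding_id: str
    verdict: str                      # e.g. "benign", "suspicious", "malicious"
    confidence: float                 # 0.0 - 1.0
    reasoning: list[str] = field(default_factory=list)

def triage(finding: dict) -> TriageVerdict:
    """Toy triage: correlate a few hypothetical contextual signals into a verdict."""
    reasons = []
    score = 0.0
    if finding.get("internet_exposed"):
        score += 0.4
        reasons.append("Resource is reachable from the internet")
    if finding.get("has_admin_role"):
        score += 0.3
        reasons.append("Attached identity holds admin permissions")
    if finding.get("anomalous_activity"):
        score += 0.3
        reasons.append("Runtime telemetry shows anomalous activity")
    verdict = "malicious" if score >= 0.7 else "suspicious" if score >= 0.4 else "benign"
    return TriageVerdict(finding["id"], verdict, score, reasons)
```

The point of the `reasoning` list is explainability: a human analyst reviewing the verdict can see exactly which signals drove it, which is what distinguishes assisted triage from a black-box score.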

Complementing these agents, Wiz introduced workflow orchestration capabilities that connect AI outputs directly to execution environments such as developer tools, collaboration platforms, and CI/CD systems. Rather than surfacing insights in isolation, the platform embeds recommendations into operational systems – for example, triggering pull requests or remediation actions – effectively closing the loop between detection and response. This reflects a broader enterprise trend toward integrating AI into existing systems of record rather than introducing parallel interfaces.
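Closing the loop between detection and response amounts to routing each recommendation to the one system that can act on it. The dispatcher below is a minimal sketch; the field names and target functions are hypothetical stand-ins for real integrations (a Git host's pull-request API, a CI/CD job, a chat channel), not Wiz's interface:

```python
# Illustrative stubs for execution targets; real integrations would call
# Git-hosting, CI/CD, or collaboration-platform APIs.
def open_pull_request(rec):
    return f"PR opened for {rec['resource']}"

def trigger_remediation_job(rec):
    return f"remediation job started for {rec['resource']}"

def notify_channel(rec):
    return f"notified owners of {rec['resource']}"

def route_recommendation(rec: dict) -> str:
    """Send an AI-generated recommendation to the system that can act on it,
    instead of surfacing it in an isolated dashboard."""
    if rec.get("fix_type") == "code_change":
        return open_pull_request(rec)        # e.g. patch IaC in a repo
    if rec.get("fix_type") == "config_change":
        return trigger_remediation_job(rec)  # e.g. run a remediation pipeline
    return notify_channel(rec)               # fall back to a human owner
```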

Wiz also expanded its platform with a Red Agent, designed to simulate adversarial behavior across cloud environments. Unlike traditional point-in-time penetration testing, the Red Agent continuously identifies exploitable paths and misconfigurations at scale, providing a persistent “attacker perspective.” This introduces continuous validation into cloud security programs, aligning with enterprise demand for real-time assurance rather than periodic testing.
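Continuously identifying exploitable paths is, at its core, graph search: model cloud resources as nodes, reachability and permission relationships as edges, then find every route from an exposed entry point to a sensitive asset. A deliberately simplified breadth-first sketch, with made-up node names and no claim to match the Red Agent's actual algorithm:

```python
from collections import deque

def exploitable_paths(edges, entry_points, crown_jewels):
    """BFS over a cloud reachability graph: every path from an exposed
    entry point to a sensitive asset is a candidate exploitable path."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    paths = []
    for start in entry_points:
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node in crown_jewels:
                paths.append(path)
                continue
            for nxt in graph.get(node, []):
                if nxt not in path:  # avoid revisiting nodes on this path
                    queue.append(path + [nxt])
    return paths
```

Because the graph is derived from live configuration, re-running the search on every change is what turns point-in-time testing into continuous validation.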

In parallel, the company launched an AI Application Protection Platform (AI-APP), extending its cloud-native application protection model to AI systems. The platform focuses on securing the full lifecycle of AI applications, covering models, data pipelines, agents, and infrastructure, within a single control plane. This is paired with AI Security Posture Management (AI-SPM), which provides visibility into AI assets, maps their access and dependencies, and monitors runtime behavior to detect drift or anomalous activity.
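Drift detection over an AI asset inventory can be thought of as diffing snapshots: a baseline mapping of assets to their access grants versus the current state. The sketch below assumes a simple asset-to-permissions mapping for illustration; the snapshot shape is an assumption, not AI-SPM's data model:

```python
def detect_drift(baseline: dict, current: dict) -> list[str]:
    """Compare two inventory snapshots (asset name -> set of permissions)
    and report new assets, removed assets, and permission changes."""
    events = []
    for asset, perms in current.items():
        if asset not in baseline:
            events.append(f"new asset: {asset}")
        elif perms != baseline[asset]:
            added = perms - baseline[asset]
            removed = baseline[asset] - perms
            if added:
                events.append(f"{asset}: permissions added {sorted(added)}")
            if removed:
                events.append(f"{asset}: permissions removed {sorted(removed)}")
    for asset in baseline:
        if asset not in current:
            events.append(f"asset removed: {asset}")
    return events
```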

Together, these releases indicate a shift from static posture management toward continuous, AI-assisted operations. The emphasis is on embedding intelligence into workflows, maintaining explainability in automated decisions, and extending security coverage to AI-driven systems themselves.

DevSecOps for AI: Why 90% Stays the Same—and the 10% That Changes Everything
Is your DevSecOps pipeline ready for AI – or just ready for the AI you tested last week? AI systems behave probabilistically: the same prompt injection attack can succeed 50 times in a row, then fail completely the next minute. Traditional shift-left testing was built for determinism; AI isn't. That gap is where risk lives. Three members of Google's security advocacy team break down what actually changes – and what doesn't – when AI enters your DevSecOps pipeline.

You'll learn:

• Why 90% of AI security is still traditional security – and exactly where the novel 10% creates new exposure
• Why DevSecOps transformations fail within a year – and the top-down cultural shift that prevents it
• How the latest DORA research shows AI agents amplify existing practices, good or bad, at scale
• What AI runtime security (e.g., Model Armor) does that a WAF cannot
• Why AI logs capturing PII and system instructions in plain text demand a new approach to observability

Key topics: non-determinism in AI testing • continuous evaluation vs. pre-deployment scans • Model Armor & runtime security layers • sensitive data redaction in logs • prompt injection defense-in-depth • agentic workload security • WAF limitations with AI agents • DevSecOps governance & top-down culture

For CISOs, DevSecOps leads, and security architects navigating AI adoption: the pipeline you spent three years building is mostly still valid. This session tells you exactly what to add. All viewers will receive a 'cheat sheet' compiling links galore, courtesy of Aron Eidelman.
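One practical consequence of non-determinism is that a single pass/fail test is meaningless for an AI defense; the gate has to be statistical. A minimal sketch of that idea, with an invented harness and threshold (nothing here reflects Model Armor or any specific tool):

```python
def evaluate_defense(attack_succeeds, trials=200, max_success_rate=0.01):
    """Run the same attack many times against a probabilistic defense and
    gate on an attack-success-rate threshold instead of a single pass/fail.

    `attack_succeeds` is a callable returning True when the attack gets
    through on one attempt.
    """
    successes = sum(1 for _ in range(trials) if attack_succeeds())
    rate = successes / trials
    return rate <= max_success_rate, rate
```

In a CI pipeline this would run continuously against the live model rather than once pre-deployment, since the same defense can regress as models, prompts, or upstream data change.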
